00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2372 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3637 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.147 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.148 The recommended git tool is: git 00:00:00.148 using credential 00000000-0000-0000-0000-000000000002 00:00:00.151 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.195 Fetching changes from the remote Git repository 00:00:00.198 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.227 Using shallow fetch with depth 1 00:00:00.227 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.227 > git --version # timeout=10 00:00:00.265 > git --version # 'git version 2.39.2' 00:00:00.265 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.289 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.289 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.587 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.599 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.612 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.612 > git config core.sparsecheckout # timeout=10 00:00:06.624 > git read-tree -mu HEAD # timeout=10 00:00:06.645 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.667 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.668 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.753 [Pipeline] Start of Pipeline 00:00:06.764 [Pipeline] library 00:00:06.766 Loading library shm_lib@master 00:00:06.766 Library shm_lib@master is cached. Copying from home. 00:00:06.783 [Pipeline] node 00:00:06.819 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.821 [Pipeline] { 00:00:06.832 [Pipeline] catchError 00:00:06.834 [Pipeline] { 00:00:06.847 [Pipeline] wrap 00:00:06.857 [Pipeline] { 00:00:06.867 [Pipeline] stage 00:00:06.869 [Pipeline] { (Prologue) 00:00:07.108 [Pipeline] sh 00:00:08.027 + logger -p user.info -t JENKINS-CI 00:00:08.059 [Pipeline] echo 00:00:08.061 Node: WFP21 00:00:08.069 [Pipeline] sh 00:00:08.411 [Pipeline] setCustomBuildProperty 00:00:08.419 [Pipeline] echo 00:00:08.421 Cleanup processes 00:00:08.425 [Pipeline] sh 00:00:08.716 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.716 5284 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.734 [Pipeline] sh 00:00:09.035 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.035 ++ grep -v 'sudo pgrep' 00:00:09.035 ++ awk '{print $1}' 00:00:09.035 + sudo kill -9 00:00:09.035 + true 00:00:09.053 [Pipeline] cleanWs 00:00:09.066 [WS-CLEANUP] Deleting project workspace... 00:00:09.066 [WS-CLEANUP] Deferred wipeout is used... 
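Before the workspace wipe finishes, the prologue has already killed anything still running out of the SPDK tree (the pgrep/kill trace above). As a standalone sketch, that cleanup idiom is roughly the following; the variable names are illustrative, and the `|| true` mirrors the `+ true` entry in the trace, which is what keeps the pipeline alive when pgrep matches nothing and `kill -9` receives no arguments:

#!/usr/bin/env bash
# List every process whose command line mentions the workspace SPDK tree,
# drop the pgrep invocation itself, and keep only the PIDs (column 1).
WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# $pids is intentionally unquoted so multiple PIDs split into arguments;
# with no PIDs at all, kill fails and '|| true' swallows the error.
sudo kill -9 $pids || true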
00:00:09.081 [WS-CLEANUP] done 00:00:09.086 [Pipeline] setCustomBuildProperty 00:00:09.103 [Pipeline] sh 00:00:09.398 + sudo git config --global --replace-all safe.directory '*' 00:00:09.494 [Pipeline] httpRequest 00:00:11.347 [Pipeline] echo 00:00:11.348 Sorcerer 10.211.164.20 is alive 00:00:11.358 [Pipeline] retry 00:00:11.360 [Pipeline] { 00:00:11.375 [Pipeline] httpRequest 00:00:11.381 HttpMethod: GET 00:00:11.381 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.382 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.396 Response Code: HTTP/1.1 200 OK 00:00:11.396 Success: Status code 200 is in the accepted range: 200,404 00:00:11.397 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.493 [Pipeline] } 00:00:14.506 [Pipeline] // retry 00:00:14.512 [Pipeline] sh 00:00:14.803 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.819 [Pipeline] httpRequest 00:00:15.189 [Pipeline] echo 00:00:15.191 Sorcerer 10.211.164.20 is alive 00:00:15.200 [Pipeline] retry 00:00:15.202 [Pipeline] { 00:00:15.216 [Pipeline] httpRequest 00:00:15.222 HttpMethod: GET 00:00:15.222 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:15.223 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:15.239 Response Code: HTTP/1.1 200 OK 00:00:15.239 Success: Status code 200 is in the accepted range: 200,404 00:00:15.240 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:26.570 [Pipeline] } 00:01:26.594 [Pipeline] // retry 00:01:26.605 [Pipeline] sh 00:01:26.912 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:29.476 [Pipeline] sh 00:01:29.770 + git -C spdk log --oneline -n5 00:01:29.770 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:29.770 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:29.770 4bcab9fb9 correct kick for CQ full case 00:01:29.770 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:29.770 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:29.792 [Pipeline] withCredentials 00:01:29.806 > git --version # timeout=10 00:01:29.819 > git --version # 'git version 2.39.2' 00:01:29.850 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:29.853 [Pipeline] { 00:01:29.863 [Pipeline] retry 00:01:29.866 [Pipeline] { 00:01:29.884 [Pipeline] sh 00:01:30.447 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:30.722 [Pipeline] } 00:01:30.741 [Pipeline] // retry 00:01:30.747 [Pipeline] } 00:01:30.764 [Pipeline] // withCredentials 00:01:30.778 [Pipeline] httpRequest 00:01:31.159 [Pipeline] echo 00:01:31.161 Sorcerer 10.211.164.20 is alive 00:01:31.173 [Pipeline] retry 00:01:31.176 [Pipeline] { 00:01:31.194 [Pipeline] httpRequest 00:01:31.199 HttpMethod: GET 00:01:31.199 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.200 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.204 Response Code: HTTP/1.1 200 OK 00:01:31.204 Success: Status code 200 is in the accepted range: 200,404 00:01:31.205 Saving response body to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:37.985 [Pipeline] } 00:01:38.004 [Pipeline] // retry 00:01:38.013 [Pipeline] sh 00:01:38.311 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:39.709 [Pipeline] sh 00:01:40.000 + git -C dpdk log --oneline -n5 00:01:40.000 caf0f5d395 version: 22.11.4 00:01:40.000 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:40.000 dc9c799c7d vhost: fix missing spinlock unlock 00:01:40.000 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:40.000 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:40.011 [Pipeline] } 00:01:40.026 [Pipeline] // stage 00:01:40.036 [Pipeline] stage 00:01:40.038 [Pipeline] { (Prepare) 00:01:40.061 [Pipeline] writeFile 00:01:40.078 [Pipeline] sh 00:01:40.372 + logger -p user.info -t JENKINS-CI 00:01:40.386 [Pipeline] sh 00:01:40.674 + logger -p user.info -t JENKINS-CI 00:01:40.686 [Pipeline] sh 00:01:40.974 + cat autorun-spdk.conf 00:01:40.974 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.974 SPDK_TEST_NVMF=1 00:01:40.974 SPDK_TEST_NVME_CLI=1 00:01:40.974 SPDK_TEST_NVMF_NICS=mlx5 00:01:40.974 SPDK_RUN_UBSAN=1 00:01:40.974 NET_TYPE=phy 00:01:40.974 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.974 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:40.983 RUN_NIGHTLY=1 00:01:40.988 [Pipeline] readFile 00:01:41.025 [Pipeline] withEnv 00:01:41.027 [Pipeline] { 00:01:41.039 [Pipeline] sh 00:01:41.329 + set -ex 00:01:41.330 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:41.330 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:41.330 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.330 ++ SPDK_TEST_NVMF=1 00:01:41.330 ++ SPDK_TEST_NVME_CLI=1 00:01:41.330 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:41.330 ++ SPDK_RUN_UBSAN=1 00:01:41.330 ++ NET_TYPE=phy 00:01:41.330 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.330 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:41.330 ++ RUN_NIGHTLY=1 00:01:41.330 + case $SPDK_TEST_NVMF_NICS in 00:01:41.330 + DRIVERS=mlx5_ib 00:01:41.330 + [[ -n mlx5_ib ]] 00:01:41.330 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:41.330 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:47.912 rmmod: ERROR: Module irdma is not currently loaded 00:01:47.912 rmmod: ERROR: Module i40iw is not currently loaded 00:01:47.912 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:47.912 + true 00:01:47.912 + for D in $DRIVERS 00:01:47.912 + sudo modprobe mlx5_ib 00:01:47.912 + exit 0 00:01:47.922 [Pipeline] } 00:01:47.936 [Pipeline] // withEnv 00:01:47.939 [Pipeline] } 00:01:47.950 [Pipeline] // stage 00:01:47.957 [Pipeline] catchError 00:01:47.959 [Pipeline] { 00:01:47.970 [Pipeline] timeout 00:01:47.970 Timeout set to expire in 1 hr 0 min 00:01:47.972 [Pipeline] { 00:01:47.983 [Pipeline] stage 00:01:47.985 [Pipeline] { (Tests) 00:01:47.997 [Pipeline] sh 00:01:48.288 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:48.288 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:48.288 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:48.288 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:48.288 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:48.288 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:48.288 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:48.288 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:48.288 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:48.288 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:48.288 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:01:48.288 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:48.288 + source /etc/os-release
00:01:48.288 ++ NAME='Fedora Linux'
00:01:48.288 ++ VERSION='39 (Cloud Edition)'
00:01:48.288 ++ ID=fedora
00:01:48.288 ++ VERSION_ID=39
00:01:48.288 ++ VERSION_CODENAME=
00:01:48.288 ++ PLATFORM_ID=platform:f39
00:01:48.288 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:48.288 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:48.288 ++ LOGO=fedora-logo-icon
00:01:48.288 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:48.288 ++ HOME_URL=https://fedoraproject.org/
00:01:48.288 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:48.288 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:48.288 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:48.288 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:48.288 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:48.288 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:48.288 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:48.288 ++ SUPPORT_END=2024-11-12
00:01:48.288 ++ VARIANT='Cloud Edition'
00:01:48.288 ++ VARIANT_ID=cloud
00:01:48.288 + uname -a
00:01:48.288 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:48.288 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:51.589 Hugepages
00:01:51.589 node   hugesize   free / total
00:01:51.589 node0  1048576kB  0 / 0
00:01:51.589 node0  2048kB     0 / 0
00:01:51.589 node1  1048576kB  0 / 0
00:01:51.589 node1  2048kB     0 / 0
00:01:51.589
00:01:51.589 Type   BDF           Vendor Device NUMA Driver   Device Block devices
00:01:51.589 I/OAT  0000:00:04.0  8086   2021   0    ioatdma  -      -
00:01:51.589 I/OAT  0000:00:04.1  8086   2021   0    ioatdma  -      -
00:01:51.589 I/OAT  0000:00:04.2  8086   2021   0    ioatdma  -      -
00:01:51.589 I/OAT  0000:00:04.3  8086   2021   0    ioatdma  -      -
00:01:51.589 I/OAT  0000:00:04.4  8086   2021   0    ioatdma  -      -
00:01:51.589 I/OAT  0000:00:04.5  8086   2021   0    ioatdma  -      -
00:01:51.589 I/OAT  0000:00:04.6  8086   2021   0    ioatdma  -      -
00:01:51.589 I/OAT  0000:00:04.7  8086   2021   0    ioatdma  -      -
00:01:51.589 I/OAT  0000:80:04.0  8086   2021   1    ioatdma  -      -
00:01:51.589 I/OAT  0000:80:04.1  8086   2021   1    ioatdma  -      -
00:01:51.589 I/OAT  0000:80:04.2  8086   2021   1    ioatdma  -      -
00:01:51.589 I/OAT  0000:80:04.3  8086   2021   1    ioatdma  -      -
00:01:51.589 I/OAT  0000:80:04.4  8086   2021   1    ioatdma  -      -
00:01:51.589 I/OAT  0000:80:04.5  8086   2021   1    ioatdma  -      -
00:01:51.589 I/OAT  0000:80:04.6  8086   2021   1    ioatdma  -      -
00:01:51.589 I/OAT  0000:80:04.7  8086   2021   1    ioatdma  -      -
00:01:51.589 NVMe   0000:d8:00.0  8086   0a54   1    nvme     nvme0  nvme0n1
00:01:51.589 + rm -f /tmp/spdk-ld-path
00:01:51.589 + source autorun-spdk.conf
00:01:51.589 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:51.589 ++ SPDK_TEST_NVMF=1
00:01:51.589 ++ SPDK_TEST_NVME_CLI=1
00:01:51.589 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:51.589 ++ SPDK_RUN_UBSAN=1
00:01:51.589 ++ NET_TYPE=phy
00:01:51.589 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:51.589 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:51.589 ++ RUN_NIGHTLY=1
00:01:51.589 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:51.589 + [[ -n '' ]]
00:01:51.589 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:51.589 + for M in /var/spdk/build-*-manifest.txt
00:01:51.589 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:51.589 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:51.589 + for M in /var/spdk/build-*-manifest.txt 00:01:51.589 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:51.589 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:51.589 + for M in /var/spdk/build-*-manifest.txt 00:01:51.589 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:51.589 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:51.589 ++ uname 00:01:51.589 + [[ Linux == \L\i\n\u\x ]] 00:01:51.589 + sudo dmesg -T 00:01:51.589 + sudo dmesg --clear 00:01:51.589 + dmesg_pid=6771 00:01:51.589 + [[ Fedora Linux == FreeBSD ]] 00:01:51.589 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.589 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.589 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:51.589 + sudo dmesg -Tw 00:01:51.589 + [[ -x /usr/src/fio-static/fio ]] 00:01:51.589 + export FIO_BIN=/usr/src/fio-static/fio 00:01:51.589 + FIO_BIN=/usr/src/fio-static/fio 00:01:51.589 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:51.589 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:51.589 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:51.589 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.589 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.589 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:51.589 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.589 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.589 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:51.589 04:18:30 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:51.589 04:18:30 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:51.589 04:18:30 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.589 04:18:30 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:51.589 04:18:30 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:51.589 04:18:30 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:01:51.589 04:18:30 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:51.589 04:18:30 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:01:51.589 04:18:30 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:51.590 04:18:30 -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:51.590 04:18:30 -- nvmf-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:51.590 04:18:30 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:51.590 04:18:30 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:51.590 04:18:30 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:51.590 04:18:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:51.590 04:18:30 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:51.590 04:18:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:51.590 04:18:30 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:51.590 04:18:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:51.590 04:18:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.590 04:18:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.590 04:18:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.590 04:18:30 -- paths/export.sh@5 -- $ export PATH 00:01:51.590 04:18:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.590 04:18:30 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:51.590 04:18:30 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:51.590 04:18:30 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731813510.XXXXXX 00:01:51.590 04:18:30 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731813510.3vakUI 00:01:51.590 04:18:30 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:51.590 04:18:30 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:01:51.590 04:18:30 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:51.590 04:18:30 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:51.590 04:18:30 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:51.590 04:18:30 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:51.590 04:18:30 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:51.590 04:18:30 -- common/autotest_common.sh@409 -- 
$ xtrace_disable 00:01:51.590 04:18:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.850 04:18:30 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:51.850 04:18:30 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:51.850 04:18:30 -- pm/common@17 -- $ local monitor 00:01:51.850 04:18:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.850 04:18:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.850 04:18:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.850 04:18:30 -- pm/common@21 -- $ date +%s 00:01:51.850 04:18:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.850 04:18:30 -- pm/common@21 -- $ date +%s 00:01:51.850 04:18:30 -- pm/common@25 -- $ sleep 1 00:01:51.850 04:18:30 -- pm/common@21 -- $ date +%s 00:01:51.850 04:18:30 -- pm/common@21 -- $ date +%s 00:01:51.850 04:18:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731813510 00:01:51.850 04:18:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731813510 00:01:51.850 04:18:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731813510 00:01:51.850 04:18:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731813510 00:01:51.850 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731813510_collect-cpu-load.pm.log 00:01:51.850 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731813510_collect-vmstat.pm.log 00:01:51.850 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731813510_collect-cpu-temp.pm.log 00:01:51.850 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731813510_collect-bmc-pm.bmc.pm.log 00:01:52.791 04:18:31 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:52.791 04:18:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:52.791 04:18:31 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:52.791 04:18:31 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:52.791 04:18:31 -- spdk/autobuild.sh@16 -- $ date -u 00:01:52.791 Sun Nov 17 03:18:31 AM UTC 2024 00:01:52.791 04:18:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:52.791 v25.01-pre-189-g83e8405e4 00:01:52.791 04:18:31 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:52.791 04:18:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:52.791 04:18:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:52.791 04:18:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:52.791 04:18:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:52.791 04:18:31 -- 
common/autotest_common.sh@10 -- $ set +x 00:01:52.791 ************************************ 00:01:52.791 START TEST ubsan 00:01:52.791 ************************************ 00:01:52.792 04:18:31 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:52.792 using ubsan 00:01:52.792 00:01:52.792 real 0m0.001s 00:01:52.792 user 0m0.000s 00:01:52.792 sys 0m0.000s 00:01:52.792 04:18:31 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:52.792 04:18:31 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:52.792 ************************************ 00:01:52.792 END TEST ubsan 00:01:52.792 ************************************ 00:01:52.792 04:18:31 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:52.792 04:18:31 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:52.792 04:18:31 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:52.792 04:18:31 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:52.792 04:18:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:52.792 04:18:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.792 ************************************ 00:01:52.792 START TEST build_native_dpdk 00:01:52.792 ************************************ 00:01:52.792 04:18:31 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:52.792 04:18:31 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:53.052 04:18:31 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:53.052 04:18:31 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:53.052 04:18:31 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:53.052 04:18:31 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:53.052 04:18:31 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:53.053 caf0f5d395 version: 22.11.4 00:01:53.053 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:53.053 dc9c799c7d vhost: fix missing spinlock unlock 00:01:53.053 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:53.053 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:53.053 04:18:31 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:53.053 patching file config/rte_config.h 00:01:53.053 Hunk #1 succeeded at 60 (offset 1 line). 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:53.053 patching file lib/pcapng/rte_pcapng.c 00:01:53.053 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:53.053 04:18:31 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:53.053 04:18:31 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:53.054 04:18:31 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:53.054 04:18:31 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:53.054 04:18:31 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:53.054 04:18:31 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:53.054 04:18:31 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:59.640 The Meson build system 00:01:59.640 Version: 1.5.0 00:01:59.640 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:59.640 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:01:59.640 Build type: native build 00:01:59.640 Program cat found: YES (/usr/bin/cat) 00:01:59.640 Project name: DPDK 00:01:59.640 Project version: 22.11.4 00:01:59.640 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:59.640 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:59.640 Host machine cpu family: x86_64 00:01:59.640 Host machine cpu: x86_64 00:01:59.640 Message: ## Building in Developer Mode ## 00:01:59.640 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.640 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:59.640 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.640 Program objdump found: YES (/usr/bin/objdump) 00:01:59.640 Program python3 found: YES (/usr/bin/python3) 00:01:59.640 Program cat found: YES (/usr/bin/cat) 00:01:59.640 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
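The lt/ge helpers traced just before the meson run come from scripts/common.sh and decide which compatibility patches get applied to this DPDK tree: each version string is split on '.', '-' and ':' and the fields are compared left to right as integers. A condensed, numeric-only sketch of that comparison (the function name and the restriction to purely numeric fields are mine; the real cmp_versions also normalizes non-numeric fields through its decimal helper):

# Return 0 if $1 < $2, 1 otherwise; fields are compared numerically.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields count as 0
        (( d1 < d2 )) && return 0                 # first differing field decides
        (( d1 > d2 )) && return 1
    done
    return 1   # equal versions are not less-than, matching the traced 'return 1'
}

version_lt 22.11.4 21.11.0 || echo '22.11.4 >= 21.11.0'   # same outcome as the traced lt call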
00:01:59.640 Checking for size of "void *" : 8 00:01:59.640 Checking for size of "void *" : 8 (cached) 00:01:59.640 Library m found: YES 00:01:59.640 Library numa found: YES 00:01:59.640 Has header "numaif.h" : YES 00:01:59.640 Library fdt found: NO 00:01:59.640 Library execinfo found: NO 00:01:59.640 Has header "execinfo.h" : YES 00:01:59.640 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:59.640 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.640 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.640 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.640 Run-time dependency openssl found: YES 3.1.1 00:01:59.640 Run-time dependency libpcap found: YES 1.10.4 00:01:59.641 Has header "pcap.h" with dependency libpcap: YES 00:01:59.641 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.641 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.641 Compiler for C supports arguments -Wformat: YES 00:01:59.641 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.641 Compiler for C supports arguments -Wformat-security: NO 00:01:59.641 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.641 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.641 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.641 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.641 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.641 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.641 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.641 Compiler for C supports arguments -Wundef: YES 00:01:59.641 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.641 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.641 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.641 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.641 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.641 Compiler for C supports arguments -mavx512f: YES 00:01:59.641 Checking if "AVX512 checking" compiles: YES 00:01:59.641 Fetching value of define "__SSE4_2__" : 1 00:01:59.641 Fetching value of define "__AES__" : 1 00:01:59.641 Fetching value of define "__AVX__" : 1 00:01:59.641 Fetching value of define "__AVX2__" : 1 00:01:59.641 Fetching value of define "__AVX512BW__" : 1 00:01:59.641 Fetching value of define "__AVX512CD__" : 1 00:01:59.641 Fetching value of define "__AVX512DQ__" : 1 00:01:59.641 Fetching value of define "__AVX512F__" : 1 00:01:59.641 Fetching value of define "__AVX512VL__" : 1 00:01:59.641 Fetching value of define "__PCLMUL__" : 1 00:01:59.641 Fetching value of define "__RDRND__" : 1 00:01:59.641 Fetching value of define "__RDSEED__" : 1 00:01:59.641 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.641 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.641 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.641 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.641 Checking for function "getentropy" : YES 00:01:59.641 Message: lib/eal: Defining dependency "eal" 00:01:59.641 Message: lib/ring: Defining dependency "ring" 00:01:59.641 Message: lib/rcu: Defining dependency "rcu" 00:01:59.641 Message: lib/mempool: Defining dependency "mempool" 00:01:59.641 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.641 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.641 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:59.641 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:59.641 Compiler for C supports arguments -mpclmul: YES 00:01:59.641 Compiler for C supports arguments -maes: YES 00:01:59.641 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.641 Compiler for C supports arguments -mavx512bw: YES 00:01:59.641 Compiler for C supports arguments -mavx512dq: YES 00:01:59.641 Compiler for C supports arguments -mavx512vl: YES 00:01:59.641 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.641 Compiler for C supports arguments -mavx2: YES 00:01:59.641 Compiler for C supports arguments -mavx: YES 00:01:59.641 Message: lib/net: Defining dependency "net" 00:01:59.641 Message: lib/meter: Defining dependency "meter" 00:01:59.641 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.641 Message: lib/pci: Defining dependency "pci" 00:01:59.641 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.641 Message: lib/metrics: Defining dependency "metrics" 00:01:59.641 Message: lib/hash: Defining dependency "hash" 00:01:59.641 Message: lib/timer: Defining dependency "timer" 00:01:59.641 Fetching value of define "__AVX2__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.641 Message: lib/acl: Defining dependency "acl" 00:01:59.641 Message: lib/bbdev: Defining dependency "bbdev" 00:01:59.641 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:59.641 Run-time dependency libelf found: YES 0.191 00:01:59.641 Message: lib/bpf: Defining dependency "bpf" 00:01:59.641 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:59.641 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.641 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.641 Message: lib/distributor: Defining dependency "distributor" 00:01:59.641 Message: lib/efd: Defining dependency "efd" 00:01:59.641 Message: lib/eventdev: Defining dependency "eventdev" 00:01:59.641 Message: lib/gpudev: Defining dependency "gpudev" 00:01:59.641 Message: lib/gro: Defining dependency "gro" 00:01:59.641 Message: lib/gso: Defining dependency "gso" 00:01:59.641 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:59.641 Message: lib/jobstats: Defining dependency "jobstats" 00:01:59.641 Message: lib/latencystats: Defining dependency "latencystats" 00:01:59.641 Message: lib/lpm: Defining dependency "lpm" 00:01:59.641 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:59.641 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:59.641 Message: lib/member: Defining dependency "member" 00:01:59.641 Message: lib/pcapng: Defining dependency "pcapng" 00:01:59.641 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.641 Message: lib/power: Defining dependency "power" 00:01:59.641 Message: lib/rawdev: Defining dependency "rawdev" 00:01:59.641 Message: lib/regexdev: Defining dependency "regexdev" 00:01:59.641 Message: lib/dmadev: 
Defining dependency "dmadev" 00:01:59.641 Message: lib/rib: Defining dependency "rib" 00:01:59.641 Message: lib/reorder: Defining dependency "reorder" 00:01:59.641 Message: lib/sched: Defining dependency "sched" 00:01:59.641 Message: lib/security: Defining dependency "security" 00:01:59.641 Message: lib/stack: Defining dependency "stack" 00:01:59.641 Has header "linux/userfaultfd.h" : YES 00:01:59.641 Message: lib/vhost: Defining dependency "vhost" 00:01:59.641 Message: lib/ipsec: Defining dependency "ipsec" 00:01:59.641 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.641 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.641 Message: lib/fib: Defining dependency "fib" 00:01:59.641 Message: lib/port: Defining dependency "port" 00:01:59.641 Message: lib/pdump: Defining dependency "pdump" 00:01:59.641 Message: lib/table: Defining dependency "table" 00:01:59.641 Message: lib/pipeline: Defining dependency "pipeline" 00:01:59.641 Message: lib/graph: Defining dependency "graph" 00:01:59.641 Message: lib/node: Defining dependency "node" 00:01:59.641 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.641 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.641 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.641 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.641 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:59.641 Compiler for C supports arguments -Wno-unused-value: YES 00:01:59.641 Compiler for C supports arguments -Wno-format: YES 00:01:59.641 Compiler for C supports arguments -Wno-format-security: YES 00:01:59.641 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:00.212 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:00.212 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:00.212 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:00.212 Fetching value of define "__AVX2__" : 1 (cached) 00:02:00.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.212 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.212 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.212 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:00.212 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:00.212 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:00.212 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:00.212 Configuring doxy-api.conf using configuration 00:02:00.212 Program sphinx-build found: NO 00:02:00.212 Configuring rte_build_config.h using configuration 00:02:00.212 Message: 00:02:00.212 ================= 00:02:00.212 Applications Enabled 00:02:00.212 ================= 00:02:00.212 00:02:00.212 apps: 00:02:00.212 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:00.212 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:00.212 test-security-perf, 00:02:00.212 00:02:00.212 Message: 00:02:00.212 ================= 00:02:00.212 Libraries Enabled 00:02:00.212 ================= 00:02:00.212 00:02:00.212 libs: 00:02:00.212 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:00.212 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:00.212 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:00.212 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:00.212 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:00.212 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:00.212 table, pipeline, graph, node, 00:02:00.212 00:02:00.212 Message: 00:02:00.212 =============== 00:02:00.212 Drivers Enabled 00:02:00.212 =============== 00:02:00.212 00:02:00.212 common: 00:02:00.212 00:02:00.212 bus: 00:02:00.212 pci, vdev, 00:02:00.212 mempool: 00:02:00.212 ring, 00:02:00.212 dma: 00:02:00.212 00:02:00.212 net: 00:02:00.212 i40e, 00:02:00.212 raw: 00:02:00.212 00:02:00.212 crypto: 00:02:00.212 00:02:00.212 compress: 00:02:00.212 00:02:00.212 regex: 00:02:00.212 00:02:00.212 vdpa: 00:02:00.212 00:02:00.212 event: 00:02:00.212 00:02:00.212 baseband: 00:02:00.212 00:02:00.212 gpu: 00:02:00.212 00:02:00.212 00:02:00.212 Message: 00:02:00.212 ================= 00:02:00.212 Content Skipped 00:02:00.212 ================= 00:02:00.212 00:02:00.212 apps: 00:02:00.212 00:02:00.212 libs: 00:02:00.212 kni: explicitly disabled via build config (deprecated lib) 00:02:00.212 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:00.212 00:02:00.212 drivers: 00:02:00.212 common/cpt: not in enabled drivers build config 00:02:00.212 common/dpaax: not in enabled drivers build config 00:02:00.212 common/iavf: not in enabled drivers build config 00:02:00.212 common/idpf: not in enabled drivers build config 00:02:00.212 common/mvep: not in enabled drivers build config 00:02:00.212 common/octeontx: not in enabled drivers build config 00:02:00.212 bus/auxiliary: not in enabled drivers build config 00:02:00.212 bus/dpaa: not in enabled drivers build config 00:02:00.212 bus/fslmc: not in enabled drivers build config 00:02:00.212 bus/ifpga: not in enabled drivers build config 00:02:00.212 bus/vmbus: not in enabled drivers build config 00:02:00.212 common/cnxk: not in enabled drivers build config 00:02:00.212 common/mlx5: not in enabled drivers build config 00:02:00.212 common/qat: not in enabled drivers build config 00:02:00.212 common/sfc_efx: not in enabled drivers build config 00:02:00.212 mempool/bucket: not in enabled drivers build config 00:02:00.212 mempool/cnxk: not in enabled drivers build config 00:02:00.212 mempool/dpaa: not in enabled drivers build config 00:02:00.212 mempool/dpaa2: not in enabled drivers build config 00:02:00.212 mempool/octeontx: not in enabled drivers build config 00:02:00.212 mempool/stack: not in enabled drivers build config 00:02:00.212 dma/cnxk: not in enabled drivers build config 00:02:00.212 dma/dpaa: not in enabled drivers build config 00:02:00.212 dma/dpaa2: not in enabled drivers build config 00:02:00.212 dma/hisilicon: not in enabled drivers build config 00:02:00.212 dma/idxd: not in enabled drivers build config 00:02:00.212 dma/ioat: not in enabled drivers build config 00:02:00.212 dma/skeleton: not in enabled drivers build config 00:02:00.212 net/af_packet: not in enabled drivers build config 00:02:00.212 net/af_xdp: not in enabled drivers build config 00:02:00.212 net/ark: not in enabled drivers build config 00:02:00.212 net/atlantic: not in enabled drivers build config 00:02:00.212 net/avp: not in enabled drivers build config 00:02:00.212 net/axgbe: not in enabled drivers build config 00:02:00.212 net/bnx2x: not in enabled drivers build config 00:02:00.212 net/bnxt: not in enabled drivers build config 00:02:00.212 net/bonding: not in enabled drivers build config 00:02:00.212 net/cnxk: not in enabled drivers build config 
00:02:00.212 net/cxgbe: not in enabled drivers build config 00:02:00.212 net/dpaa: not in enabled drivers build config 00:02:00.212 net/dpaa2: not in enabled drivers build config 00:02:00.212 net/e1000: not in enabled drivers build config 00:02:00.212 net/ena: not in enabled drivers build config 00:02:00.212 net/enetc: not in enabled drivers build config 00:02:00.212 net/enetfec: not in enabled drivers build config 00:02:00.212 net/enic: not in enabled drivers build config 00:02:00.212 net/failsafe: not in enabled drivers build config 00:02:00.212 net/fm10k: not in enabled drivers build config 00:02:00.212 net/gve: not in enabled drivers build config 00:02:00.212 net/hinic: not in enabled drivers build config 00:02:00.212 net/hns3: not in enabled drivers build config 00:02:00.212 net/iavf: not in enabled drivers build config 00:02:00.212 net/ice: not in enabled drivers build config 00:02:00.212 net/idpf: not in enabled drivers build config 00:02:00.212 net/igc: not in enabled drivers build config 00:02:00.212 net/ionic: not in enabled drivers build config 00:02:00.212 net/ipn3ke: not in enabled drivers build config 00:02:00.212 net/ixgbe: not in enabled drivers build config 00:02:00.212 net/kni: not in enabled drivers build config 00:02:00.212 net/liquidio: not in enabled drivers build config 00:02:00.212 net/mana: not in enabled drivers build config 00:02:00.212 net/memif: not in enabled drivers build config 00:02:00.212 net/mlx4: not in enabled drivers build config 00:02:00.212 net/mlx5: not in enabled drivers build config 00:02:00.212 net/mvneta: not in enabled drivers build config 00:02:00.212 net/mvpp2: not in enabled drivers build config 00:02:00.212 net/netvsc: not in enabled drivers build config 00:02:00.212 net/nfb: not in enabled drivers build config 00:02:00.212 net/nfp: not in enabled drivers build config 00:02:00.212 net/ngbe: not in enabled drivers build config 00:02:00.212 net/null: not in enabled drivers build config 00:02:00.212 net/octeontx: not in enabled drivers build config 00:02:00.212 net/octeon_ep: not in enabled drivers build config 00:02:00.212 net/pcap: not in enabled drivers build config 00:02:00.212 net/pfe: not in enabled drivers build config 00:02:00.212 net/qede: not in enabled drivers build config 00:02:00.212 net/ring: not in enabled drivers build config 00:02:00.212 net/sfc: not in enabled drivers build config 00:02:00.212 net/softnic: not in enabled drivers build config 00:02:00.212 net/tap: not in enabled drivers build config 00:02:00.212 net/thunderx: not in enabled drivers build config 00:02:00.212 net/txgbe: not in enabled drivers build config 00:02:00.212 net/vdev_netvsc: not in enabled drivers build config 00:02:00.212 net/vhost: not in enabled drivers build config 00:02:00.212 net/virtio: not in enabled drivers build config 00:02:00.212 net/vmxnet3: not in enabled drivers build config 00:02:00.212 raw/cnxk_bphy: not in enabled drivers build config 00:02:00.212 raw/cnxk_gpio: not in enabled drivers build config 00:02:00.212 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:00.212 raw/ifpga: not in enabled drivers build config 00:02:00.212 raw/ntb: not in enabled drivers build config 00:02:00.213 raw/skeleton: not in enabled drivers build config 00:02:00.213 crypto/armv8: not in enabled drivers build config 00:02:00.213 crypto/bcmfs: not in enabled drivers build config 00:02:00.213 crypto/caam_jr: not in enabled drivers build config 00:02:00.213 crypto/ccp: not in enabled drivers build config 00:02:00.213 crypto/cnxk: not in enabled drivers 
build config 00:02:00.213 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.213 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.213 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.213 crypto/mlx5: not in enabled drivers build config 00:02:00.213 crypto/mvsam: not in enabled drivers build config 00:02:00.213 crypto/nitrox: not in enabled drivers build config 00:02:00.213 crypto/null: not in enabled drivers build config 00:02:00.213 crypto/octeontx: not in enabled drivers build config 00:02:00.213 crypto/openssl: not in enabled drivers build config 00:02:00.213 crypto/scheduler: not in enabled drivers build config 00:02:00.213 crypto/uadk: not in enabled drivers build config 00:02:00.213 crypto/virtio: not in enabled drivers build config 00:02:00.213 compress/isal: not in enabled drivers build config 00:02:00.213 compress/mlx5: not in enabled drivers build config 00:02:00.213 compress/octeontx: not in enabled drivers build config 00:02:00.213 compress/zlib: not in enabled drivers build config 00:02:00.213 regex/mlx5: not in enabled drivers build config 00:02:00.213 regex/cn9k: not in enabled drivers build config 00:02:00.213 vdpa/ifc: not in enabled drivers build config 00:02:00.213 vdpa/mlx5: not in enabled drivers build config 00:02:00.213 vdpa/sfc: not in enabled drivers build config 00:02:00.213 event/cnxk: not in enabled drivers build config 00:02:00.213 event/dlb2: not in enabled drivers build config 00:02:00.213 event/dpaa: not in enabled drivers build config 00:02:00.213 event/dpaa2: not in enabled drivers build config 00:02:00.213 event/dsw: not in enabled drivers build config 00:02:00.213 event/opdl: not in enabled drivers build config 00:02:00.213 event/skeleton: not in enabled drivers build config 00:02:00.213 event/sw: not in enabled drivers build config 00:02:00.213 event/octeontx: not in enabled drivers build config 00:02:00.213 baseband/acc: not in enabled drivers build config 00:02:00.213 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:00.213 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:00.213 baseband/la12xx: not in enabled drivers build config 00:02:00.213 baseband/null: not in enabled drivers build config 00:02:00.213 baseband/turbo_sw: not in enabled drivers build config 00:02:00.213 gpu/cuda: not in enabled drivers build config
00:02:00.213
00:02:00.213 Build targets in project: 311
00:02:00.213
00:02:00.213 DPDK 22.11.4
00:02:00.213
00:02:00.213 User defined options
00:02:00.213   libdir        : lib
00:02:00.213   prefix        : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:02:00.213   c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:00.213   c_link_args   :
00:02:00.213   enable_docs   : false
00:02:00.213   enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:00.213   enable_kmods  : false
00:02:00.213   machine       : native
00:02:00.213   tests         : false
00:02:00.213
00:02:00.213 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:00.213 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
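For reference, the configuration summary above corresponds to a plain meson configure step; the deprecation warning is printed because the wrapper (common/autobuild_common.sh, per the next line) still invokes `meson [options]` rather than the explicit `meson setup` form. Below is a minimal sketch reconstructed only from the "User defined options" logged above; the real script may pass additional flags, and the driver list is copied as logged (its trailing comma suggests the original line wrapped):

    # Hedged reconstruction of the configure step; every value is taken
    # from the "User defined options" block in the summary above.
    meson setup /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp \
        --libdir=lib \
        --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false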
00:02:00.213 04:18:38 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:02:00.213 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:00.479 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:00.479 [2/740] Generating lib/rte_telemetry_def with a custom command 00:02:00.479 [3/740] Generating lib/rte_kvargs_def with a custom command 00:02:00.479 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:00.479 [5/740] Generating lib/rte_eal_mingw with a custom command 00:02:00.479 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:00.479 [7/740] Generating lib/rte_ring_mingw with a custom command 00:02:00.479 [8/740] Generating lib/rte_rcu_mingw with a custom command 00:02:00.479 [9/740] Generating lib/rte_eal_def with a custom command 00:02:00.479 [10/740] Generating lib/rte_ring_def with a custom command 00:02:00.479 [11/740] Generating lib/rte_mempool_def with a custom command 00:02:00.479 [12/740] Generating lib/rte_mbuf_def with a custom command 00:02:00.479 [13/740] Generating lib/rte_net_def with a custom command 00:02:00.479 [14/740] Generating lib/rte_meter_def with a custom command 00:02:00.479 [15/740] Generating lib/rte_meter_mingw with a custom command 00:02:00.479 [16/740] Generating lib/rte_rcu_def with a custom command 00:02:00.479 [17/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:00.479 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:00.479 [19/740] Generating lib/rte_mempool_mingw with a custom command 00:02:00.479 [20/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.479 [21/740] Generating lib/rte_net_mingw with a custom command 00:02:00.479 [22/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.479 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:00.479 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.479 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.479 [26/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:00.479 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:00.479 [28/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.479 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.479 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.479 [31/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.479 [32/740] Generating lib/rte_ethdev_def with a custom command 00:02:00.479 [33/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:00.479 [34/740] Generating lib/rte_pci_mingw with a custom command 00:02:00.479 [35/740] Generating lib/rte_pci_def with a custom command 00:02:00.740 [36/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.740 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.740 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:00.740 [39/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.740 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.740 [41/740] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.740 [42/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.740 [43/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.740 [44/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:00.740 [45/740] Linking static target lib/librte_kvargs.a 00:02:00.740 [46/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.740 [47/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.740 [48/740] Generating lib/rte_metrics_mingw with a custom command 00:02:00.740 [49/740] Generating lib/rte_cmdline_def with a custom command 00:02:00.740 [50/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:00.740 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.740 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.740 [53/740] Generating lib/rte_metrics_def with a custom command 00:02:00.740 [54/740] Generating lib/rte_hash_def with a custom command 00:02:00.740 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.740 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:00.740 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.740 [58/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.740 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.740 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.740 [61/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.740 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.740 [63/740] Generating lib/rte_timer_mingw with a custom command 00:02:00.740 [64/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.740 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.740 [66/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.740 [67/740] Generating lib/rte_timer_def with a custom command 00:02:00.740 [68/740] Generating lib/rte_hash_mingw with a custom command 00:02:00.740 [69/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.740 [70/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:00.740 [71/740] Generating lib/rte_acl_def with a custom command 00:02:00.740 [72/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.740 [73/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.740 [74/740] Generating lib/rte_acl_mingw with a custom command 00:02:00.740 [75/740] Generating lib/rte_bbdev_def with a custom command 00:02:00.740 [76/740] Linking static target lib/librte_pci.a 00:02:00.740 [77/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.740 [78/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:00.740 [79/740] Generating lib/rte_bitratestats_def with a custom command 00:02:00.740 [80/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:00.740 [81/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.740 [82/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.740 [83/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.740 
[84/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.740 [85/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.740 [86/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.740 [87/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.740 [88/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.740 [89/740] Generating lib/rte_bpf_def with a custom command 00:02:00.740 [90/740] Generating lib/rte_bpf_mingw with a custom command 00:02:00.740 [91/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.740 [92/740] Generating lib/rte_cfgfile_def with a custom command 00:02:00.740 [93/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:00.740 [94/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:00.740 [95/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.740 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.740 [97/740] Generating lib/rte_compressdev_def with a custom command 00:02:00.740 [98/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.740 [99/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.740 [100/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.740 [101/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.740 [102/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.740 [103/740] Linking static target lib/librte_meter.a 00:02:00.740 [104/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:00.740 [105/740] Linking static target lib/librte_ring.a 00:02:00.740 [106/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.740 [107/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:00.740 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:00.740 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:00.740 [110/740] Generating lib/rte_cryptodev_def with a custom command 00:02:00.740 [111/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:00.740 [112/740] Generating lib/rte_distributor_def with a custom command 00:02:00.740 [113/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.740 [114/740] Generating lib/rte_distributor_mingw with a custom command 00:02:00.740 [115/740] Generating lib/rte_efd_def with a custom command 00:02:00.740 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.740 [117/740] Generating lib/rte_efd_mingw with a custom command 00:02:00.740 [118/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:00.740 [119/740] Generating lib/rte_eventdev_def with a custom command 00:02:00.740 [120/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:01.010 [121/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:01.010 [122/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:01.010 [123/740] Generating lib/rte_gpudev_def with a custom command 00:02:01.010 [124/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:01.010 [125/740] Generating lib/rte_gro_def with a custom command 00:02:01.010 [126/740] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:01.010 [127/740] Generating lib/rte_gro_mingw with a custom command 00:02:01.010 [128/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:01.010 [129/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.010 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:01.010 [131/740] Generating lib/rte_gso_def with a custom command 00:02:01.010 [132/740] Generating lib/rte_gso_mingw with a custom command 00:02:01.010 [133/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:01.010 [134/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.010 [135/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.010 [136/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.010 [137/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:01.010 [138/740] Generating lib/rte_ip_frag_def with a custom command 00:02:01.010 [139/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:01.010 [140/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:01.010 [141/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:01.274 [142/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:01.274 [143/740] Generating lib/rte_jobstats_def with a custom command 00:02:01.274 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:01.274 [145/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:01.274 [146/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.274 [147/740] Linking target lib/librte_kvargs.so.23.0 00:02:01.274 [148/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:01.274 [149/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.274 [150/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:01.274 [151/740] Generating lib/rte_latencystats_def with a custom command 00:02:01.274 [152/740] Linking static target lib/librte_cfgfile.a 00:02:01.274 [153/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.274 [154/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:01.274 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:01.274 [156/740] Generating lib/rte_lpm_def with a custom command 00:02:01.274 [157/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.274 [158/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.274 [159/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.274 [160/740] Generating lib/rte_lpm_mingw with a custom command 00:02:01.274 [161/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.274 [162/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.274 [163/740] Generating lib/rte_member_def with a custom command 00:02:01.274 [164/740] Generating lib/rte_member_mingw with a custom command 00:02:01.274 [165/740] Generating lib/rte_pcapng_def with a custom command 00:02:01.274 [166/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:01.274 [167/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:01.274 [168/740] 
Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.274 [169/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:01.274 [170/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:01.274 [171/740] Linking static target lib/librte_timer.a 00:02:01.274 [172/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.274 [173/740] Linking static target lib/librte_jobstats.a 00:02:01.274 [174/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.274 [175/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:01.274 [176/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:01.274 [177/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.274 [178/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:01.274 [179/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:01.274 [180/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:01.274 [181/740] Linking static target lib/librte_cmdline.a 00:02:01.274 [182/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:01.274 [183/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:01.274 [184/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:01.274 [185/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:01.274 [186/740] Generating lib/rte_power_def with a custom command 00:02:01.274 [187/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:01.274 [188/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:01.274 [189/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:01.274 [190/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:01.274 [191/740] Generating lib/rte_power_mingw with a custom command 00:02:01.274 [192/740] Generating lib/rte_rawdev_def with a custom command 00:02:01.274 [193/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:01.274 [194/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.274 [195/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:01.274 [196/740] Linking static target lib/librte_metrics.a 00:02:01.274 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:01.274 [198/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.274 [199/740] Linking static target lib/librte_telemetry.a 00:02:01.274 [200/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:01.274 [201/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:01.274 [202/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.274 [203/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.535 [204/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:01.535 [205/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:01.535 [206/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:01.535 [207/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:01.535 [208/740] Generating lib/rte_regexdev_def with a custom command 00:02:01.535 [209/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:01.535 [210/740] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:01.535 [211/740] Generating lib/rte_dmadev_def with a custom command 00:02:01.535 [212/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.535 [213/740] Linking static target lib/librte_net.a 00:02:01.535 [214/740] Generating lib/rte_rib_def with a custom command 00:02:01.535 [215/740] Generating lib/rte_rib_mingw with a custom command 00:02:01.535 [216/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.535 [217/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.535 [218/740] Generating lib/rte_reorder_mingw with a custom command 00:02:01.535 [219/740] Generating lib/rte_reorder_def with a custom command 00:02:01.535 [220/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:01.535 [221/740] Generating lib/rte_sched_def with a custom command 00:02:01.535 [222/740] Generating lib/rte_sched_mingw with a custom command 00:02:01.535 [223/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:01.535 [224/740] Generating lib/rte_security_mingw with a custom command 00:02:01.535 [225/740] Generating lib/rte_security_def with a custom command 00:02:01.535 [226/740] Linking static target lib/librte_bitratestats.a 00:02:01.535 [227/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:01.535 [228/740] Generating lib/rte_stack_mingw with a custom command 00:02:01.535 [229/740] Generating lib/rte_stack_def with a custom command 00:02:01.535 [230/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:01.535 [231/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.535 [232/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:01.535 [233/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:01.535 [234/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:01.535 [235/740] Generating lib/rte_vhost_def with a custom command 00:02:01.535 [236/740] Generating lib/rte_vhost_mingw with a custom command 00:02:01.535 [237/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:01.535 [238/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:01.535 [239/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:01.535 [240/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:01.535 [241/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:01.535 [242/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:01.535 [243/740] Generating lib/rte_ipsec_def with a custom command 00:02:01.535 [244/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:01.535 [245/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:01.535 [246/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:01.535 [247/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:01.535 [248/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:01.535 [249/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:01.535 [250/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:01.535 [251/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:01.535 [252/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:01.535 [253/740] Linking static target lib/librte_stack.a 00:02:01.535 
[254/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:01.535 [255/740] Generating lib/rte_fib_mingw with a custom command 00:02:01.535 [256/740] Generating lib/rte_fib_def with a custom command 00:02:01.535 [257/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:01.535 [258/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:01.535 [259/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:01.535 [260/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:01.535 [261/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:01.802 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:01.802 [263/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:01.802 [264/740] Linking static target lib/librte_compressdev.a 00:02:01.802 [265/740] Generating lib/rte_port_def with a custom command 00:02:01.802 [266/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.802 [267/740] Generating lib/rte_port_mingw with a custom command 00:02:01.802 [268/740] Generating lib/rte_pdump_def with a custom command 00:02:01.802 [269/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:01.802 [270/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:01.802 [271/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.802 [272/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:01.802 [273/740] Generating lib/rte_pdump_mingw with a custom command 00:02:01.802 [274/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:01.802 [275/740] Linking static target lib/librte_rcu.a 00:02:01.802 [276/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:01.802 [277/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:01.802 [278/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:01.802 [279/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:01.802 [280/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:01.802 [281/740] Linking static target lib/librte_mempool.a 00:02:01.802 [282/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:01.802 [283/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:01.802 [284/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:01.802 [285/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:01.802 [286/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.802 [287/740] Linking static target lib/librte_rawdev.a 00:02:01.802 [288/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:01.802 [289/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:01.802 [290/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.802 [291/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.802 [292/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:01.802 [293/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:01.802 [294/740] Linking static target lib/librte_gro.a 00:02:01.802 [295/740] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:01.802 [296/740] Linking static target lib/librte_gpudev.a 00:02:01.802 [297/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:01.802 [298/740] Linking static target lib/librte_dmadev.a 00:02:01.802 [299/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:01.802 [300/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:01.802 [301/740] Linking static target lib/librte_bbdev.a 00:02:01.802 [302/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:01.802 [303/740] Generating lib/rte_table_mingw with a custom command 00:02:01.802 [304/740] Generating lib/rte_table_def with a custom command 00:02:01.802 [305/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.067 [306/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.067 [307/740] Generating lib/rte_pipeline_def with a custom command 00:02:02.067 [308/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:02.067 [309/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:02.067 [310/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:02.067 [311/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:02.068 [312/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:02.068 [313/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.068 [314/740] Linking static target lib/librte_gso.a 00:02:02.068 [315/740] Linking static target lib/librte_latencystats.a 00:02:02.068 [316/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:02.068 [317/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:02.068 [318/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:02.068 [319/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:02.068 [320/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.068 [321/740] Generating lib/rte_graph_mingw with a custom command 00:02:02.068 [322/740] Generating lib/rte_graph_def with a custom command 00:02:02.068 [323/740] Linking target lib/librte_telemetry.so.23.0 00:02:02.068 [324/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:02.068 [325/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:02.068 [326/740] Linking static target lib/librte_distributor.a 00:02:02.068 [327/740] Linking static target lib/librte_ip_frag.a 00:02:02.068 [328/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:02.068 [329/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:02.068 [330/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.068 [331/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:02.068 [332/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:02.330 [333/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:02.330 [334/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:02.330 [335/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:02.330 [336/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 
00:02:02.330 [337/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:02.330 [338/740] Linking static target lib/librte_regexdev.a 00:02:02.330 [339/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:02.330 [340/740] Generating lib/rte_node_def with a custom command 00:02:02.330 [341/740] Generating lib/rte_node_mingw with a custom command 00:02:02.330 [342/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.330 [343/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.330 [344/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:02.330 [345/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:02.330 [346/740] Linking static target lib/librte_reorder.a 00:02:02.330 [347/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.330 [348/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:02.330 [349/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:02.330 [350/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:02.330 [351/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:02.330 [352/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:02.330 [353/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.330 [354/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:02.330 [355/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:02.330 [356/740] Linking static target lib/librte_power.a 00:02:02.330 [357/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:02.330 [358/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:02.330 [359/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:02.330 [360/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:02.330 [361/740] Linking static target lib/librte_eal.a 00:02:02.330 [362/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:02.330 [363/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:02.330 [364/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.330 [365/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:02.330 [366/740] Linking static target lib/librte_pcapng.a 00:02:02.330 [367/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:02.330 [368/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:02.330 [369/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.330 [370/740] Linking static target lib/librte_security.a 00:02:02.330 [371/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:02.330 [372/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:02.330 [373/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:02.594 [374/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.594 [375/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:02.594 [376/740] Linking static target lib/librte_mbuf.a 00:02:02.594 [377/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:02.594 [378/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 
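When iterating on one library it is not necessary to rebuild all 740 targets: ninja accepts any output path it knows about as a target name. A sketch, assuming the same build directory as the `ninja -C` invocation above; `lib/librte_eal.a` mirrors the "Linking static target" paths in this log, and `ninja -t targets` lists every valid name:

    # List the outputs ninja knows about, then rebuild just one of them.
    ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -t targets all | head
    ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp lib/librte_eal.a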
00:02:02.594 [379/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:02.594 [380/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.594 [381/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:02.594 [382/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:02.594 [383/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:02.594 [384/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.594 [385/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.594 [386/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:02.594 [387/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:02.594 [388/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:02.594 [389/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:02.594 [390/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:02.594 [391/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:02.594 [392/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:02.594 [393/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.594 [394/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:02.594 [395/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:02.594 [396/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:02.594 [397/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:02.594 [398/740] Linking static target lib/librte_bpf.a 00:02:02.594 [399/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:02.594 [400/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:02.594 [401/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:02.594 [402/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:02.594 [403/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:02.594 [404/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:02.594 [405/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.594 [406/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:02.594 [407/740] Linking static target lib/librte_rib.a 00:02:02.594 [408/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:02.594 [409/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:02.856 [410/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:02.856 [411/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:02.856 [412/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:02.856 [413/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:02.856 [414/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:02.856 [415/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:02.856 [416/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:02.856 [417/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.856 [418/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:02.856 [419/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:02.856 [420/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:02.856 [421/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:02.856 [422/740] Linking static target lib/librte_lpm.a 00:02:02.856 [423/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:02.856 [424/740] Linking static target lib/librte_graph.a 00:02:02.856 [425/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:02.856 [426/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.856 [427/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:02.856 [428/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.856 [429/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.856 [430/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:02.856 [431/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:02.856 [432/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:02.856 [433/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:02.856 [434/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:02.856 [435/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:02.856 [436/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:02.856 [437/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:02.856 [438/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:02.856 [439/740] Linking static target lib/librte_efd.a 00:02:03.122 [440/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:03.122 [441/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:03.122 [442/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:03.122 [443/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.122 [444/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:03.122 [445/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:03.122 [446/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:03.122 [447/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:03.122 [448/740] Linking static target lib/librte_fib.a 00:02:03.122 [449/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:03.122 [450/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:03.122 [451/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:03.122 [452/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:03.122 [453/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.122 [454/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:03.122 [455/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.122 [456/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.122 [457/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:03.384 [458/740] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.384 [459/740] Linking static target drivers/librte_bus_vdev.a 00:02:03.384 [460/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:03.384 [461/740] Linking static target lib/librte_pdump.a 00:02:03.384 [462/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:03.384 [463/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.384 [464/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.384 [465/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:03.384 [466/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.384 [467/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:03.384 [468/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:03.384 [469/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:03.384 [470/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.384 [471/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.384 [472/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:03.384 [473/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:03.384 [474/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.384 [475/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:03.384 [476/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:03.384 [477/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.384 [478/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:03.384 [479/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.384 [480/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.384 [481/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:03.649 [482/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:03.649 [483/740] Linking static target drivers/librte_bus_pci.a 00:02:03.649 [484/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:03.649 [485/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:03.649 [486/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:03.649 [487/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:03.649 [488/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:03.649 [489/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:03.649 [490/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.649 [491/740] Linking static target lib/librte_table.a 00:02:03.649 [492/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:03.649 [493/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:03.649 [494/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:03.649 [495/740] 
Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:03.649 [496/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:03.649 [497/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:03.649 [498/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.911 [499/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:03.911 [500/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:03.911 [501/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.911 [502/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:03.911 [503/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:03.911 [504/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:03.911 [505/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:03.911 [506/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:03.911 [507/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:03.911 [508/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:03.911 [509/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:03.911 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:03.911 [511/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:03.911 [512/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:03.911 [513/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:03.911 [514/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.911 [515/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:03.911 [516/740] Linking static target lib/librte_cryptodev.a 00:02:03.911 [517/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:03.911 [518/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:03.911 [519/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:03.911 [520/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:03.911 [521/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:03.911 [522/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:03.911 [523/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:03.911 [524/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:03.911 [525/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:03.911 [526/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:03.911 [527/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:03.911 [528/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:03.911 [529/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:03.911 [530/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:03.911 [531/740] Linking static target lib/librte_sched.a 
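The long "not in enabled drivers build config" list at the start of this build is driven by the `enable_drivers` option, which can be inspected or changed on the existing build directory without re-running setup. A sketch; the added `crypto/openssl` entry is purely illustrative (it is one of the drivers logged above as disabled):

    # Show the current option values for this build directory:
    meson configure /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp
    # Illustrative change: extend the driver set; the next ninja run reconfigures.
    meson configure /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,crypto/openssl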
00:02:04.171 [532/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:04.171 [533/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.171 [534/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:04.171 [535/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:04.171 [536/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:04.171 [537/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:04.171 [538/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:04.171 [539/740] Linking static target lib/librte_node.a 00:02:04.171 [540/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:04.171 [541/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:04.171 [542/740] Linking static target lib/librte_ipsec.a 00:02:04.171 [543/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:04.171 [544/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:04.171 [545/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:04.171 [546/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:04.171 [547/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.171 [548/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.171 [549/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:04.171 [550/740] Linking static target lib/librte_ethdev.a 00:02:04.171 [551/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.171 [552/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.171 [553/740] Linking static target drivers/librte_mempool_ring.a 00:02:04.171 [554/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:04.171 [555/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:04.171 [556/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:04.171 [557/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:04.171 [558/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:04.171 [559/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:04.431 [560/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:04.431 [561/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:04.431 [562/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:04.431 [563/740] Linking static target lib/librte_member.a 00:02:04.431 [564/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:04.431 [565/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:04.431 [566/740] Linking static target lib/librte_port.a 00:02:04.431 [567/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:04.431 [568/740] Linking static target lib/librte_eventdev.a 00:02:04.431 [569/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.431 [570/740] 
Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:04.431 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:04.431 [572/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:04.431 [573/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:04.431 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:04.431 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:04.431 [576/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:04.431 [577/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:04.431 [578/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:04.431 [579/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:04.431 [580/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:04.431 [581/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:04.431 [582/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:04.431 [583/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:04.431 [584/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.431 [585/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:04.431 [586/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:04.431 [587/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:04.431 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:04.431 [589/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.431 [590/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:04.431 [591/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.431 [592/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:04.691 [593/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:04.691 [594/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:04.691 [595/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:04.691 [596/740] Linking static target lib/librte_hash.a 00:02:04.691 [597/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:04.691 [598/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.691 [599/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:04.691 [600/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:04.951 [601/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:04.951 [602/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:04.951 [603/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:04.951 [604/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:04.951 [605/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:04.951 [606/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:04.951 [607/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:04.951 [608/740] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:04.951 [609/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:04.951 [610/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:05.210 [611/740] Linking static target lib/librte_acl.a 00:02:05.210 [612/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:05.210 [613/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.469 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:05.469 [615/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:05.469 [616/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.469 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:05.729 [618/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.299 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:06.299 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:06.299 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:06.868 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:06.868 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:07.127 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:07.128 [625/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:07.128 [626/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:07.128 [627/740] Linking static target drivers/librte_net_i40e.a 00:02:07.388 [628/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:07.652 [629/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.652 [630/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.911 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:07.911 [632/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:08.481 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.761 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.761 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:13.761 [636/740] Linking static target lib/librte_vhost.a 00:02:14.702 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:14.702 [638/740] Linking static target lib/librte_pipeline.a 00:02:14.962 [639/740] Linking target app/dpdk-test-acl 00:02:14.962 [640/740] Linking target app/dpdk-proc-info 00:02:14.962 [641/740] Linking target app/dpdk-dumpcap 00:02:14.962 [642/740] Linking target app/dpdk-test-fib 00:02:14.962 [643/740] Linking target app/dpdk-test-cmdline 00:02:14.962 [644/740] Linking target app/dpdk-test-gpudev 00:02:14.962 [645/740] Linking target app/dpdk-test-compress-perf 00:02:14.962 [646/740] Linking target app/dpdk-pdump 00:02:14.962 [647/740] Linking target app/dpdk-test-sad 00:02:14.962 [648/740] Linking target app/dpdk-test-crypto-perf 00:02:14.962 [649/740] Linking target app/dpdk-test-pipeline 00:02:14.962 [650/740] Linking target app/dpdk-test-eventdev 
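The "Linking target app/..." entries above produce runnable binaries under app/ in the build tree. A hypothetical smoke run of the testpmd binary linked here, not something this CI job executes at this point; it assumes hugepages are already configured and, for actual forwarding, a NIC bound to a DPDK-usable driver (net/i40e is the only net PMD enabled in this configuration):

    cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp
    # -l selects cores and -n the memory channels; everything after "--" is
    # testpmd's own command line, and -i drops into the interactive prompt.
    sudo ./app/dpdk-testpmd -l 0-1 -n 4 -- -i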
00:02:14.962 [651/740] Linking target app/dpdk-test-regex
00:02:14.962 [652/740] Linking target app/dpdk-test-bbdev
00:02:14.962 [653/740] Linking target app/dpdk-test-flow-perf
00:02:14.962 [654/740] Linking target app/dpdk-test-security-perf
00:02:14.962 [655/740] Linking target app/dpdk-testpmd
00:02:15.903 [656/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.474 [657/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.734 [658/740] Linking target lib/librte_eal.so.23.0
00:02:16.734 [659/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:16.734 [660/740] Linking target lib/librte_meter.so.23.0
00:02:16.734 [661/740] Linking target lib/librte_ring.so.23.0
00:02:16.734 [662/740] Linking target lib/librte_pci.so.23.0
00:02:16.734 [663/740] Linking target lib/librte_timer.so.23.0
00:02:16.734 [664/740] Linking target lib/librte_jobstats.so.23.0
00:02:16.993 [665/740] Linking target lib/librte_dmadev.so.23.0
00:02:16.993 [666/740] Linking target lib/librte_rawdev.so.23.0
00:02:16.993 [667/740] Linking target lib/librte_cfgfile.so.23.0
00:02:16.993 [668/740] Linking target lib/librte_stack.so.23.0
00:02:16.993 [669/740] Linking target drivers/librte_bus_vdev.so.23.0
00:02:16.993 [670/740] Linking target lib/librte_graph.so.23.0
00:02:16.993 [671/740] Linking target lib/librte_acl.so.23.0
00:02:16.993 [672/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:16.993 [673/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:16.993 [674/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:16.993 [675/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:16.993 [676/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:16.993 [677/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:16.994 [678/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:16.994 [679/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:16.994 [680/740] Linking target lib/librte_mempool.so.23.0
00:02:16.994 [681/740] Linking target lib/librte_rcu.so.23.0
00:02:16.994 [682/740] Linking target drivers/librte_bus_pci.so.23.0
00:02:17.254 [683/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:17.254 [684/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:17.254 [685/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:17.254 [686/740] Linking target drivers/librte_mempool_ring.so.23.0
00:02:17.254 [687/740] Linking target lib/librte_rib.so.23.0
00:02:17.254 [688/740] Linking target lib/librte_mbuf.so.23.0
00:02:17.254 [689/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:17.254 [690/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:17.514 [691/740] Linking target lib/librte_regexdev.so.23.0
00:02:17.514 [692/740] Linking target lib/librte_compressdev.so.23.0
00:02:17.514 [693/740] Linking target lib/librte_bbdev.so.23.0
00:02:17.514 [694/740] Linking target lib/librte_reorder.so.23.0
00:02:17.514 [695/740] Linking target lib/librte_net.so.23.0
00:02:17.514 [696/740] Linking target lib/librte_fib.so.23.0
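The [N/740] counters in this stretch of the log are ninja's progress through the 740-target DPDK v22.11 build: C object compiles, static and shared library links, meson's "Generating symbol file ... .symbols" bookkeeping (used to avoid relinking dependents when a shared library's exported symbols have not changed), and DPDK's "Generating lib/*.sym_chk" custom targets, which wrap the project's exported-symbol check. A minimal sketch of the command sequence behind this output; the meson setup step and its options are assumptions (they are not shown in this log), while the build-tmp directory, the -j112 parallelism, and the install step are taken from the trace further down:

  cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk
  meson setup build-tmp              # configure step (assumed; the job's actual meson options are not visible here)
  ninja -C build-tmp -j112           # produces the [N/740] compile/link progress shown in this log
  ninja -C build-tmp -j112 install   # the "[0/1] Installing files." step traced below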
00:02:17.514 [697/740] Linking target lib/librte_gpudev.so.23.0
00:02:17.514 [698/740] Linking target lib/librte_distributor.so.23.0
00:02:17.514 [699/740] Linking target lib/librte_sched.so.23.0
00:02:17.514 [700/740] Linking target lib/librte_cryptodev.so.23.0
00:02:17.514 [701/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:17.514 [702/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:17.514 [703/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:17.514 [704/740] Linking target lib/librte_cmdline.so.23.0
00:02:17.514 [705/740] Linking target lib/librte_hash.so.23.0
00:02:17.514 [706/740] Linking target lib/librte_ethdev.so.23.0
00:02:17.514 [707/740] Linking target lib/librte_security.so.23.0
00:02:17.774 [708/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:17.774 [709/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:17.774 [710/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:17.774 [711/740] Linking target lib/librte_efd.so.23.0
00:02:17.774 [712/740] Linking target lib/librte_metrics.so.23.0
00:02:17.774 [713/740] Linking target lib/librte_member.so.23.0
00:02:17.774 [714/740] Linking target lib/librte_lpm.so.23.0
00:02:17.774 [715/740] Linking target lib/librte_gro.so.23.0
00:02:17.774 [716/740] Linking target lib/librte_power.so.23.0
00:02:17.774 [717/740] Linking target lib/librte_pcapng.so.23.0
00:02:17.774 [718/740] Linking target lib/librte_gso.so.23.0
00:02:17.774 [719/740] Linking target lib/librte_ip_frag.so.23.0
00:02:17.774 [720/740] Linking target lib/librte_bpf.so.23.0
00:02:17.774 [721/740] Linking target lib/librte_vhost.so.23.0
00:02:17.774 [722/740] Linking target lib/librte_ipsec.so.23.0
00:02:17.774 [723/740] Linking target lib/librte_eventdev.so.23.0
00:02:17.774 [724/740] Linking target drivers/librte_net_i40e.so.23.0
00:02:18.034 [725/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:18.034 [726/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:18.034 [727/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:18.034 [728/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:18.034 [729/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:18.034 [730/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:18.034 [731/740] Linking target lib/librte_bitratestats.so.23.0
00:02:18.034 [732/740] Linking target lib/librte_latencystats.so.23.0
00:02:18.034 [733/740] Linking target lib/librte_node.so.23.0
00:02:18.034 [734/740] Linking target lib/librte_pdump.so.23.0
00:02:18.034 [735/740] Linking target lib/librte_port.so.23.0
00:02:18.294 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:18.294 [737/740] Linking target lib/librte_table.so.23.0
00:02:18.294 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:19.676 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.936 [740/740] Linking target lib/librte_pipeline.so.23.0
00:02:19.936 04:18:58 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:19.936 04:18:58 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:19.936 04:18:58 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install
00:02:19.936 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp'
00:02:19.936 [0/1] Installing files.
00:02:20.200 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.200 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:20.201 
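Note the install pattern visible above: every example is copied as a complete, self-contained source tree (sources, headers, configs, and a Makefile) under build/share/dpdk/examples, so each one can be rebuilt out of tree against the just-installed libraries via pkg-config. A sketch of that rebuild; the pkgconfig path is an assumption (the libdir chosen by this install is not visible in the log):

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig    # assumed location of libdpdk.pc
  make -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld  # any installed example with a Makefile builds the same way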
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:20.201 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:20.201 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 
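Unlike the other directories, the bpf example being copied at this point in the manifest is a set of standalone eBPF snippets (t1.c, t2.c, t3.c, dummy.c, plus a README) rather than a host application; its README describes compiling them with clang's BPF target so the resulting objects can be loaded into a DPDK process through librte_bpf. A sketch under that assumption:

  cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf
  clang -O2 -target bpf -c t1.c -o t1.o   # assumed invocation; yields an object loadable with rte_bpf_elf_load()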
00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:20.202 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.202 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.203 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:20.204 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:20.205 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:20.205 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing 
lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
00:02:20.205 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.205 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.206 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.206 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.206 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing 
lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.469 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.470 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.470 Installing lib/librte_graph.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.470 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.470 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.470 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.470 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:20.470 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.470 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:20.470 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.470 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:20.470 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.470 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:20.470 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.470 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.471 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.472 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:20.473 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:20.473 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:20.473 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:20.473 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:20.473 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:20.473 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:20.473 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:20.473 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:20.473 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:20.473 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:20.473 
Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:20.473 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:20.473 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:20.473 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:20.473 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:20.473 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:20.473 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:20.473 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:20.473 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:20.473 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:20.474 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:20.474 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:20.474 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:20.474 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:20.474 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:20.474 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:20.474 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:20.474 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:20.474 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:20.474 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:20.474 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:20.474 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:20.474 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:20.474 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:20.474 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:20.474 Installing symlink pointing to 
librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:20.474 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:20.474 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:20.474 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:20.474 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:20.474 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:20.474 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:20.474 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:20.474 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:20.474 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:20.474 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:20.474 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:20.474 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:20.474 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:20.474 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:20.474 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:20.474 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:20.474 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:20.474 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:20.474 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:20.474 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:20.474 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:20.474 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:20.474 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:20.474 Installing symlink pointing to librte_jobstats.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:20.474 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:20.474 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:20.474 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:20.474 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:20.474 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:20.474 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:20.474 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:20.474 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:20.474 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:20.474 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:20.474 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:20.474 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:20.474 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:20.474 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:20.474 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:20.474 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:20.474 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:20.474 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:20.474 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:20.474 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:20.474 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:20.474 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:20.474 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:20.474 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:20.474 Installing symlink pointing to 
librte_security.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:20.474 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:20.474 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:20.474 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:20.474 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:20.474 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:20.474 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:20.474 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:20.474 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:20.474 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:20.474 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:20.474 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:20.474 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:20.474 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:20.474 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:20.474 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:20.474 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:20.474 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:20.474 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:20.474 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:20.474 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:20.475 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:20.475 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:20.475 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:20.475 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:20.475 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:20.475 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:20.475 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:20.475 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:20.475 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:20.475 Installing symlink pointing to librte_graph.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:20.475 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:20.475 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:20.475 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:20.475 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:20.475 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:20.475 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:20.475 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:20.475 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:20.475 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:20.475 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:20.475 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:20.735 04:18:59 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:20.735 04:18:59 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:20.735 00:02:20.735 real 0m27.753s 00:02:20.735 user 6m37.787s 00:02:20.735 sys 2m17.849s 00:02:20.735 04:18:59 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:20.735 04:18:59 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:20.735 ************************************ 00:02:20.735 END TEST build_native_dpdk 00:02:20.735 ************************************ 00:02:20.735 04:18:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:20.735 04:18:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:20.735 04:18:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:20.735 04:18:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:20.735 04:18:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:20.735 04:18:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:20.735 04:18:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:20.735 04:18:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:20.735 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:02:20.995 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:20.995 DPDK includes: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:20.995 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:21.565 Using 'verbs' RDMA provider 00:02:37.402 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:49.631 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:50.473 Creating mk/config.mk...done. 00:02:50.473 Creating mk/cc.flags.mk...done. 00:02:50.473 Type 'make' to build. 00:02:50.473 04:19:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:50.473 04:19:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:50.473 04:19:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:50.473 04:19:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.473 ************************************ 00:02:50.473 START TEST make 00:02:50.473 ************************************ 00:02:50.473 04:19:29 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:51.045 make[1]: Nothing to be done for 'all'. 00:03:23.149 CC lib/log/log.o 00:03:23.149 CC lib/log/log_flags.o 00:03:23.149 CC lib/log/log_deprecated.o 00:03:23.149 CC lib/ut/ut.o 00:03:23.149 CC lib/ut_mock/mock.o 00:03:23.149 LIB libspdk_log.a 00:03:23.149 LIB libspdk_ut.a 00:03:23.149 LIB libspdk_ut_mock.a 00:03:23.149 SO libspdk_log.so.7.1 00:03:23.149 SO libspdk_ut_mock.so.6.0 00:03:23.149 SO libspdk_ut.so.2.0 00:03:23.149 SYMLINK libspdk_log.so 00:03:23.149 SYMLINK libspdk_ut_mock.so 00:03:23.149 SYMLINK libspdk_ut.so 00:03:23.149 CC lib/dma/dma.o 00:03:23.149 CC lib/ioat/ioat.o 00:03:23.149 CC lib/util/base64.o 00:03:23.149 CC lib/util/bit_array.o 00:03:23.149 CC lib/util/cpuset.o 00:03:23.149 CC lib/util/crc16.o 00:03:23.149 CC lib/util/crc32.o 00:03:23.149 CC lib/util/crc32c.o 00:03:23.149 CC lib/util/crc32_ieee.o 00:03:23.149 CC lib/util/crc64.o 00:03:23.149 CC lib/util/dif.o 00:03:23.149 CC lib/util/fd.o 00:03:23.149 CC lib/util/fd_group.o 00:03:23.149 CC lib/util/file.o 00:03:23.149 CXX lib/trace_parser/trace.o 00:03:23.149 CC lib/util/hexlify.o 00:03:23.149 CC lib/util/iov.o 00:03:23.149 CC lib/util/math.o 00:03:23.149 CC lib/util/net.o 00:03:23.149 CC lib/util/pipe.o 00:03:23.149 CC lib/util/strerror_tls.o 00:03:23.149 CC lib/util/string.o 00:03:23.149 CC lib/util/uuid.o 00:03:23.149 CC lib/util/xor.o 00:03:23.149 CC lib/util/zipf.o 00:03:23.149 CC lib/util/md5.o 00:03:23.149 CC lib/vfio_user/host/vfio_user_pci.o 00:03:23.149 CC lib/vfio_user/host/vfio_user.o 00:03:23.149 LIB libspdk_dma.a 00:03:23.149 SO libspdk_dma.so.5.0 00:03:23.149 LIB libspdk_ioat.a 00:03:23.149 SO libspdk_ioat.so.7.0 00:03:23.149 SYMLINK libspdk_dma.so 00:03:23.149 SYMLINK libspdk_ioat.so 00:03:23.149 LIB libspdk_vfio_user.a 00:03:23.149 LIB libspdk_util.a 00:03:23.149 SO libspdk_vfio_user.so.5.0 00:03:23.149 SO libspdk_util.so.10.1 00:03:23.149 SYMLINK libspdk_vfio_user.so 00:03:23.149 SYMLINK libspdk_util.so 00:03:23.149 CC lib/json/json_parse.o 00:03:23.149 CC lib/json/json_util.o 00:03:23.149 CC lib/json/json_write.o 00:03:23.149 CC lib/env_dpdk/env.o 00:03:23.149 CC lib/env_dpdk/memory.o 00:03:23.149 CC lib/env_dpdk/pci.o 00:03:23.149 CC lib/env_dpdk/init.o 00:03:23.149 CC lib/idxd/idxd.o 00:03:23.149 CC lib/env_dpdk/pci_ioat.o 00:03:23.149 CC lib/env_dpdk/threads.o 00:03:23.149 CC lib/conf/conf.o 00:03:23.149 CC lib/idxd/idxd_user.o
00:03:23.149 CC lib/idxd/idxd_kernel.o 00:03:23.149 CC lib/env_dpdk/pci_virtio.o 00:03:23.149 CC lib/vmd/vmd.o 00:03:23.149 CC lib/env_dpdk/pci_vmd.o 00:03:23.149 CC lib/rdma_utils/rdma_utils.o 00:03:23.149 CC lib/vmd/led.o 00:03:23.149 CC lib/env_dpdk/pci_idxd.o 00:03:23.149 CC lib/env_dpdk/pci_event.o 00:03:23.149 CC lib/env_dpdk/sigbus_handler.o 00:03:23.149 CC lib/env_dpdk/pci_dpdk.o 00:03:23.149 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:23.149 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:23.149 LIB libspdk_conf.a 00:03:23.149 SO libspdk_conf.so.6.0 00:03:23.149 LIB libspdk_json.a 00:03:23.149 LIB libspdk_rdma_utils.a 00:03:23.149 SO libspdk_rdma_utils.so.1.0 00:03:23.149 SYMLINK libspdk_conf.so 00:03:23.149 SO libspdk_json.so.6.0 00:03:23.149 SYMLINK libspdk_rdma_utils.so 00:03:23.149 SYMLINK libspdk_json.so 00:03:23.149 LIB libspdk_idxd.a 00:03:23.149 SO libspdk_idxd.so.12.1 00:03:23.149 LIB libspdk_vmd.a 00:03:23.149 LIB libspdk_trace_parser.a 00:03:23.149 SO libspdk_vmd.so.6.0 00:03:23.149 SYMLINK libspdk_idxd.so 00:03:23.149 SO libspdk_trace_parser.so.6.0 00:03:23.149 SYMLINK libspdk_vmd.so 00:03:23.149 SYMLINK libspdk_trace_parser.so 00:03:23.149 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:23.149 CC lib/rdma_provider/common.o 00:03:23.149 CC lib/jsonrpc/jsonrpc_server.o 00:03:23.149 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:23.149 CC lib/jsonrpc/jsonrpc_client.o 00:03:23.149 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:23.149 LIB libspdk_rdma_provider.a 00:03:23.149 SO libspdk_rdma_provider.so.7.0 00:03:23.149 LIB libspdk_jsonrpc.a 00:03:23.149 SYMLINK libspdk_rdma_provider.so 00:03:23.149 SO libspdk_jsonrpc.so.6.0 00:03:23.149 LIB libspdk_env_dpdk.a 00:03:23.149 SYMLINK libspdk_jsonrpc.so 00:03:23.149 SO libspdk_env_dpdk.so.15.1 00:03:23.149 SYMLINK libspdk_env_dpdk.so 00:03:23.149 CC lib/rpc/rpc.o 00:03:23.149 LIB libspdk_rpc.a 00:03:23.149 SO libspdk_rpc.so.6.0 00:03:23.149 SYMLINK libspdk_rpc.so 00:03:23.149 CC lib/notify/notify.o 00:03:23.149 CC lib/trace/trace.o 00:03:23.149 CC lib/notify/notify_rpc.o 00:03:23.149 CC lib/trace/trace_flags.o 00:03:23.149 CC lib/trace/trace_rpc.o 00:03:23.149 CC lib/keyring/keyring.o 00:03:23.149 CC lib/keyring/keyring_rpc.o 00:03:23.149 LIB libspdk_notify.a 00:03:23.149 SO libspdk_notify.so.6.0 00:03:23.149 LIB libspdk_keyring.a 00:03:23.150 LIB libspdk_trace.a 00:03:23.150 SO libspdk_keyring.so.2.0 00:03:23.150 SYMLINK libspdk_notify.so 00:03:23.150 SO libspdk_trace.so.11.0 00:03:23.410 SYMLINK libspdk_keyring.so 00:03:23.410 SYMLINK libspdk_trace.so 00:03:23.671 CC lib/thread/thread.o 00:03:23.671 CC lib/thread/iobuf.o 00:03:23.671 CC lib/sock/sock.o 00:03:23.671 CC lib/sock/sock_rpc.o 00:03:23.932 LIB libspdk_sock.a 00:03:24.192 SO libspdk_sock.so.10.0 00:03:24.192 SYMLINK libspdk_sock.so 00:03:24.453 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:24.453 CC lib/nvme/nvme_ctrlr.o 00:03:24.453 CC lib/nvme/nvme_fabric.o 00:03:24.453 CC lib/nvme/nvme_ns_cmd.o 00:03:24.453 CC lib/nvme/nvme_ns.o 00:03:24.453 CC lib/nvme/nvme_pcie_common.o 00:03:24.453 CC lib/nvme/nvme_pcie.o 00:03:24.453 CC lib/nvme/nvme_qpair.o 00:03:24.453 CC lib/nvme/nvme.o 00:03:24.453 CC lib/nvme/nvme_quirks.o 00:03:24.453 CC lib/nvme/nvme_transport.o 00:03:24.453 CC lib/nvme/nvme_discovery.o 00:03:24.453 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.453 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.453 CC lib/nvme/nvme_tcp.o 00:03:24.453 CC lib/nvme/nvme_opal.o 00:03:24.453 CC lib/nvme/nvme_io_msg.o 00:03:24.453 CC lib/nvme/nvme_poll_group.o 00:03:24.453 CC lib/nvme/nvme_zns.o 00:03:24.453 CC 
lib/nvme/nvme_stubs.o 00:03:24.453 CC lib/nvme/nvme_auth.o 00:03:24.453 CC lib/nvme/nvme_cuse.o 00:03:24.453 CC lib/nvme/nvme_rdma.o 00:03:24.715 LIB libspdk_thread.a 00:03:24.715 SO libspdk_thread.so.11.0 00:03:24.976 SYMLINK libspdk_thread.so 00:03:25.236 CC lib/blob/zeroes.o 00:03:25.236 CC lib/blob/blobstore.o 00:03:25.236 CC lib/blob/request.o 00:03:25.236 CC lib/blob/blob_bs_dev.o 00:03:25.236 CC lib/accel/accel.o 00:03:25.236 CC lib/accel/accel_rpc.o 00:03:25.236 CC lib/accel/accel_sw.o 00:03:25.236 CC lib/virtio/virtio_vfio_user.o 00:03:25.236 CC lib/virtio/virtio.o 00:03:25.236 CC lib/virtio/virtio_vhost_user.o 00:03:25.236 CC lib/fsdev/fsdev.o 00:03:25.236 CC lib/fsdev/fsdev_io.o 00:03:25.236 CC lib/virtio/virtio_pci.o 00:03:25.236 CC lib/fsdev/fsdev_rpc.o 00:03:25.236 CC lib/init/json_config.o 00:03:25.236 CC lib/init/subsystem_rpc.o 00:03:25.236 CC lib/init/subsystem.o 00:03:25.236 CC lib/init/rpc.o 00:03:25.496 LIB libspdk_init.a 00:03:25.496 SO libspdk_init.so.6.0 00:03:25.496 LIB libspdk_virtio.a 00:03:25.496 SO libspdk_virtio.so.7.0 00:03:25.496 SYMLINK libspdk_init.so 00:03:25.757 SYMLINK libspdk_virtio.so 00:03:25.757 LIB libspdk_fsdev.a 00:03:25.757 SO libspdk_fsdev.so.2.0 00:03:25.757 SYMLINK libspdk_fsdev.so 00:03:26.018 CC lib/event/app.o 00:03:26.018 CC lib/event/reactor.o 00:03:26.018 CC lib/event/log_rpc.o 00:03:26.018 CC lib/event/app_rpc.o 00:03:26.018 CC lib/event/scheduler_static.o 00:03:26.018 LIB libspdk_accel.a 00:03:26.018 SO libspdk_accel.so.16.0 00:03:26.018 SYMLINK libspdk_accel.so 00:03:26.279 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:26.279 LIB libspdk_nvme.a 00:03:26.279 LIB libspdk_event.a 00:03:26.279 SO libspdk_nvme.so.15.0 00:03:26.279 SO libspdk_event.so.14.0 00:03:26.539 SYMLINK libspdk_event.so 00:03:26.540 CC lib/bdev/bdev.o 00:03:26.540 CC lib/bdev/bdev_rpc.o 00:03:26.540 CC lib/bdev/bdev_zone.o 00:03:26.540 CC lib/bdev/part.o 00:03:26.540 CC lib/bdev/scsi_nvme.o 00:03:26.540 SYMLINK libspdk_nvme.so 00:03:26.540 LIB libspdk_fuse_dispatcher.a 00:03:26.801 SO libspdk_fuse_dispatcher.so.1.0 00:03:26.801 SYMLINK libspdk_fuse_dispatcher.so 00:03:27.372 LIB libspdk_blob.a 00:03:27.372 SO libspdk_blob.so.11.0 00:03:27.372 SYMLINK libspdk_blob.so 00:03:27.943 CC lib/blobfs/blobfs.o 00:03:27.943 CC lib/blobfs/tree.o 00:03:27.943 CC lib/lvol/lvol.o 00:03:28.204 LIB libspdk_bdev.a 00:03:28.464 LIB libspdk_blobfs.a 00:03:28.464 SO libspdk_bdev.so.17.0 00:03:28.464 SO libspdk_blobfs.so.10.0 00:03:28.464 LIB libspdk_lvol.a 00:03:28.464 SYMLINK libspdk_bdev.so 00:03:28.464 SYMLINK libspdk_blobfs.so 00:03:28.464 SO libspdk_lvol.so.10.0 00:03:28.464 SYMLINK libspdk_lvol.so 00:03:28.726 CC lib/nbd/nbd.o 00:03:28.726 CC lib/nbd/nbd_rpc.o 00:03:28.726 CC lib/ftl/ftl_core.o 00:03:28.726 CC lib/ublk/ublk.o 00:03:28.726 CC lib/ftl/ftl_init.o 00:03:28.726 CC lib/ublk/ublk_rpc.o 00:03:28.726 CC lib/ftl/ftl_layout.o 00:03:28.726 CC lib/ftl/ftl_debug.o 00:03:28.726 CC lib/nvmf/ctrlr.o 00:03:28.726 CC lib/scsi/dev.o 00:03:28.726 CC lib/ftl/ftl_io.o 00:03:28.726 CC lib/nvmf/ctrlr_discovery.o 00:03:28.726 CC lib/ftl/ftl_l2p.o 00:03:28.726 CC lib/ftl/ftl_sb.o 00:03:28.726 CC lib/scsi/lun.o 00:03:28.726 CC lib/nvmf/ctrlr_bdev.o 00:03:28.726 CC lib/scsi/port.o 00:03:28.726 CC lib/ftl/ftl_l2p_flat.o 00:03:28.726 CC lib/nvmf/nvmf.o 00:03:28.726 CC lib/ftl/ftl_nv_cache.o 00:03:28.726 CC lib/scsi/scsi.o 00:03:28.726 CC lib/nvmf/subsystem.o 00:03:28.726 CC lib/ftl/ftl_band.o 00:03:28.726 CC lib/scsi/scsi_bdev.o 00:03:28.726 CC lib/ftl/ftl_band_ops.o 00:03:28.726 CC 
lib/scsi/scsi_pr.o 00:03:28.726 CC lib/nvmf/nvmf_rpc.o 00:03:28.726 CC lib/ftl/ftl_writer.o 00:03:28.726 CC lib/ftl/ftl_rq.o 00:03:28.726 CC lib/scsi/scsi_rpc.o 00:03:28.726 CC lib/scsi/task.o 00:03:28.726 CC lib/nvmf/transport.o 00:03:28.726 CC lib/ftl/ftl_reloc.o 00:03:28.726 CC lib/nvmf/tcp.o 00:03:28.726 CC lib/ftl/ftl_p2l_log.o 00:03:28.726 CC lib/ftl/ftl_l2p_cache.o 00:03:28.726 CC lib/nvmf/stubs.o 00:03:28.726 CC lib/ftl/ftl_p2l.o 00:03:28.726 CC lib/nvmf/mdns_server.o 00:03:28.726 CC lib/nvmf/rdma.o 00:03:28.726 CC lib/nvmf/auth.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.985 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.985 CC lib/ftl/utils/ftl_conf.o 00:03:28.985 CC lib/ftl/utils/ftl_mempool.o 00:03:28.985 CC lib/ftl/utils/ftl_md.o 00:03:28.985 CC lib/ftl/utils/ftl_bitmap.o 00:03:28.985 CC lib/ftl/utils/ftl_property.o 00:03:28.985 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.985 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.985 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.986 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.986 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.986 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.986 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:28.986 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.986 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.986 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:28.986 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.986 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:28.986 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:28.986 CC lib/ftl/base/ftl_base_dev.o 00:03:28.986 CC lib/ftl/base/ftl_base_bdev.o 00:03:28.986 CC lib/ftl/ftl_trace.o 00:03:29.246 LIB libspdk_nbd.a 00:03:29.246 SO libspdk_nbd.so.7.0 00:03:29.507 SYMLINK libspdk_nbd.so 00:03:29.507 LIB libspdk_scsi.a 00:03:29.507 SO libspdk_scsi.so.9.0 00:03:29.507 SYMLINK libspdk_scsi.so 00:03:29.507 LIB libspdk_ublk.a 00:03:29.767 SO libspdk_ublk.so.3.0 00:03:29.767 SYMLINK libspdk_ublk.so 00:03:29.767 LIB libspdk_ftl.a 00:03:30.026 SO libspdk_ftl.so.9.0 00:03:30.026 CC lib/iscsi/conn.o 00:03:30.026 CC lib/iscsi/init_grp.o 00:03:30.026 CC lib/iscsi/iscsi.o 00:03:30.026 CC lib/iscsi/param.o 00:03:30.026 CC lib/vhost/vhost_rpc.o 00:03:30.026 CC lib/iscsi/tgt_node.o 00:03:30.026 CC lib/iscsi/portal_grp.o 00:03:30.026 CC lib/vhost/vhost.o 00:03:30.026 CC lib/iscsi/iscsi_subsystem.o 00:03:30.026 CC lib/vhost/vhost_scsi.o 00:03:30.026 CC lib/iscsi/iscsi_rpc.o 00:03:30.026 CC lib/iscsi/task.o 00:03:30.026 CC lib/vhost/vhost_blk.o 00:03:30.026 CC lib/vhost/rte_vhost_user.o 00:03:30.286 SYMLINK libspdk_ftl.so 00:03:30.545 LIB libspdk_nvmf.a 00:03:30.805 SO libspdk_nvmf.so.20.0 00:03:30.805 LIB libspdk_vhost.a 00:03:30.805 SO libspdk_vhost.so.8.0 00:03:30.805 SYMLINK libspdk_nvmf.so 00:03:30.805 SYMLINK libspdk_vhost.so 00:03:31.066 LIB libspdk_iscsi.a 00:03:31.066 SO libspdk_iscsi.so.8.0 00:03:31.066 SYMLINK libspdk_iscsi.so 00:03:31.637 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.898 CC module/accel/error/accel_error.o 00:03:31.898 CC module/accel/error/accel_error_rpc.o 00:03:31.898 CC 
module/accel/iaa/accel_iaa.o 00:03:31.898 LIB libspdk_env_dpdk_rpc.a 00:03:31.898 CC module/accel/ioat/accel_ioat.o 00:03:31.898 CC module/accel/iaa/accel_iaa_rpc.o 00:03:31.898 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.898 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.898 CC module/sock/posix/posix.o 00:03:31.898 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:31.898 CC module/accel/dsa/accel_dsa.o 00:03:31.898 CC module/fsdev/aio/fsdev_aio.o 00:03:31.898 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.898 CC module/fsdev/aio/linux_aio_mgr.o 00:03:31.898 CC module/blob/bdev/blob_bdev.o 00:03:31.898 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.898 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:31.898 CC module/keyring/linux/keyring.o 00:03:31.898 CC module/keyring/linux/keyring_rpc.o 00:03:31.898 CC module/keyring/file/keyring.o 00:03:31.898 CC module/keyring/file/keyring_rpc.o 00:03:31.898 SO libspdk_env_dpdk_rpc.so.6.0 00:03:32.159 SYMLINK libspdk_env_dpdk_rpc.so 00:03:32.159 LIB libspdk_scheduler_gscheduler.a 00:03:32.159 LIB libspdk_accel_error.a 00:03:32.159 LIB libspdk_keyring_linux.a 00:03:32.159 LIB libspdk_keyring_file.a 00:03:32.160 LIB libspdk_scheduler_dpdk_governor.a 00:03:32.160 LIB libspdk_accel_ioat.a 00:03:32.160 LIB libspdk_accel_iaa.a 00:03:32.160 SO libspdk_scheduler_gscheduler.so.4.0 00:03:32.160 SO libspdk_keyring_linux.so.1.0 00:03:32.160 SO libspdk_accel_error.so.2.0 00:03:32.160 SO libspdk_keyring_file.so.2.0 00:03:32.160 LIB libspdk_scheduler_dynamic.a 00:03:32.160 SO libspdk_accel_ioat.so.6.0 00:03:32.160 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:32.160 SO libspdk_accel_iaa.so.3.0 00:03:32.160 SYMLINK libspdk_scheduler_gscheduler.so 00:03:32.160 SO libspdk_scheduler_dynamic.so.4.0 00:03:32.160 LIB libspdk_accel_dsa.a 00:03:32.160 SYMLINK libspdk_keyring_linux.so 00:03:32.160 LIB libspdk_blob_bdev.a 00:03:32.160 SYMLINK libspdk_keyring_file.so 00:03:32.160 SYMLINK libspdk_accel_error.so 00:03:32.160 SYMLINK libspdk_accel_ioat.so 00:03:32.160 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:32.160 SO libspdk_accel_dsa.so.5.0 00:03:32.160 SYMLINK libspdk_accel_iaa.so 00:03:32.160 SO libspdk_blob_bdev.so.11.0 00:03:32.160 SYMLINK libspdk_scheduler_dynamic.so 00:03:32.421 SYMLINK libspdk_blob_bdev.so 00:03:32.421 SYMLINK libspdk_accel_dsa.so 00:03:32.421 LIB libspdk_fsdev_aio.a 00:03:32.421 LIB libspdk_sock_posix.a 00:03:32.421 SO libspdk_fsdev_aio.so.1.0 00:03:32.682 SO libspdk_sock_posix.so.6.0 00:03:32.682 SYMLINK libspdk_fsdev_aio.so 00:03:32.682 SYMLINK libspdk_sock_posix.so 00:03:32.942 CC module/bdev/aio/bdev_aio_rpc.o 00:03:32.942 CC module/bdev/aio/bdev_aio.o 00:03:32.942 CC module/bdev/delay/vbdev_delay.o 00:03:32.942 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:32.942 CC module/blobfs/bdev/blobfs_bdev.o 00:03:32.942 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:32.942 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.942 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.942 CC module/bdev/lvol/vbdev_lvol.o 00:03:32.942 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:32.942 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.942 CC module/bdev/gpt/gpt.o 00:03:32.942 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.942 CC module/bdev/error/vbdev_error.o 00:03:32.942 CC module/bdev/split/vbdev_split.o 00:03:32.942 CC module/bdev/gpt/vbdev_gpt.o 00:03:32.942 CC module/bdev/error/vbdev_error_rpc.o 00:03:32.943 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.943 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:32.943 CC 
module/bdev/malloc/bdev_malloc.o 00:03:32.943 CC module/bdev/raid/bdev_raid.o 00:03:32.943 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.943 CC module/bdev/ftl/bdev_ftl.o 00:03:32.943 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.943 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:32.943 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.943 CC module/bdev/iscsi/bdev_iscsi.o 00:03:32.943 CC module/bdev/raid/raid0.o 00:03:32.943 CC module/bdev/raid/raid1.o 00:03:32.943 CC module/bdev/raid/concat.o 00:03:32.943 CC module/bdev/null/bdev_null.o 00:03:32.943 CC module/bdev/nvme/bdev_nvme.o 00:03:32.943 CC module/bdev/null/bdev_null_rpc.o 00:03:32.943 CC module/bdev/nvme/nvme_rpc.o 00:03:32.943 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.943 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.943 CC module/bdev/nvme/vbdev_opal.o 00:03:32.943 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:32.943 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.943 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:32.943 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:32.943 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.943 LIB libspdk_blobfs_bdev.a 00:03:33.203 SO libspdk_blobfs_bdev.so.6.0 00:03:33.203 LIB libspdk_bdev_split.a 00:03:33.203 SO libspdk_bdev_split.so.6.0 00:03:33.203 LIB libspdk_bdev_error.a 00:03:33.203 LIB libspdk_bdev_null.a 00:03:33.203 SYMLINK libspdk_blobfs_bdev.so 00:03:33.203 LIB libspdk_bdev_passthru.a 00:03:33.203 LIB libspdk_bdev_gpt.a 00:03:33.203 LIB libspdk_bdev_ftl.a 00:03:33.203 SO libspdk_bdev_null.so.6.0 00:03:33.203 SO libspdk_bdev_error.so.6.0 00:03:33.203 SO libspdk_bdev_gpt.so.6.0 00:03:33.203 SO libspdk_bdev_passthru.so.6.0 00:03:33.203 LIB libspdk_bdev_aio.a 00:03:33.203 SO libspdk_bdev_ftl.so.6.0 00:03:33.203 SYMLINK libspdk_bdev_split.so 00:03:33.203 LIB libspdk_bdev_zone_block.a 00:03:33.203 LIB libspdk_bdev_delay.a 00:03:33.203 LIB libspdk_bdev_malloc.a 00:03:33.203 SO libspdk_bdev_aio.so.6.0 00:03:33.203 SO libspdk_bdev_zone_block.so.6.0 00:03:33.203 SYMLINK libspdk_bdev_null.so 00:03:33.203 LIB libspdk_bdev_iscsi.a 00:03:33.203 SYMLINK libspdk_bdev_gpt.so 00:03:33.203 SYMLINK libspdk_bdev_error.so 00:03:33.203 SYMLINK libspdk_bdev_passthru.so 00:03:33.203 SYMLINK libspdk_bdev_ftl.so 00:03:33.203 SO libspdk_bdev_delay.so.6.0 00:03:33.203 SO libspdk_bdev_malloc.so.6.0 00:03:33.203 SO libspdk_bdev_iscsi.so.6.0 00:03:33.203 SYMLINK libspdk_bdev_aio.so 00:03:33.203 SYMLINK libspdk_bdev_zone_block.so 00:03:33.463 LIB libspdk_bdev_lvol.a 00:03:33.463 SYMLINK libspdk_bdev_delay.so 00:03:33.463 SYMLINK libspdk_bdev_malloc.so 00:03:33.463 LIB libspdk_bdev_virtio.a 00:03:33.463 SYMLINK libspdk_bdev_iscsi.so 00:03:33.463 SO libspdk_bdev_virtio.so.6.0 00:03:33.463 SO libspdk_bdev_lvol.so.6.0 00:03:33.463 SYMLINK libspdk_bdev_virtio.so 00:03:33.463 SYMLINK libspdk_bdev_lvol.so 00:03:33.724 LIB libspdk_bdev_raid.a 00:03:33.724 SO libspdk_bdev_raid.so.6.0 00:03:33.724 SYMLINK libspdk_bdev_raid.so 00:03:34.666 LIB libspdk_bdev_nvme.a 00:03:34.666 SO libspdk_bdev_nvme.so.7.1 00:03:34.927 SYMLINK libspdk_bdev_nvme.so 00:03:35.498 CC module/event/subsystems/iobuf/iobuf.o 00:03:35.498 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:35.498 CC module/event/subsystems/vmd/vmd.o 00:03:35.498 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:35.498 CC module/event/subsystems/sock/sock.o 00:03:35.498 CC module/event/subsystems/fsdev/fsdev.o 00:03:35.498 CC module/event/subsystems/scheduler/scheduler.o 00:03:35.498 CC module/event/subsystems/keyring/keyring.o 00:03:35.498 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:03:35.758 LIB libspdk_event_fsdev.a 00:03:35.758 LIB libspdk_event_keyring.a 00:03:35.758 LIB libspdk_event_vmd.a 00:03:35.758 LIB libspdk_event_iobuf.a 00:03:35.758 LIB libspdk_event_vhost_blk.a 00:03:35.758 LIB libspdk_event_scheduler.a 00:03:35.758 LIB libspdk_event_sock.a 00:03:35.758 SO libspdk_event_fsdev.so.1.0 00:03:35.758 SO libspdk_event_keyring.so.1.0 00:03:35.758 SO libspdk_event_vmd.so.6.0 00:03:35.758 SO libspdk_event_iobuf.so.3.0 00:03:35.758 SO libspdk_event_vhost_blk.so.3.0 00:03:35.758 SO libspdk_event_sock.so.5.0 00:03:35.758 SO libspdk_event_scheduler.so.4.0 00:03:35.758 SYMLINK libspdk_event_fsdev.so 00:03:35.758 SYMLINK libspdk_event_keyring.so 00:03:35.758 SYMLINK libspdk_event_iobuf.so 00:03:35.758 SYMLINK libspdk_event_sock.so 00:03:35.758 SYMLINK libspdk_event_scheduler.so 00:03:35.758 SYMLINK libspdk_event_vhost_blk.so 00:03:35.758 SYMLINK libspdk_event_vmd.so 00:03:36.330 CC module/event/subsystems/accel/accel.o 00:03:36.330 LIB libspdk_event_accel.a 00:03:36.330 SO libspdk_event_accel.so.6.0 00:03:36.330 SYMLINK libspdk_event_accel.so 00:03:36.901 CC module/event/subsystems/bdev/bdev.o 00:03:36.901 LIB libspdk_event_bdev.a 00:03:36.901 SO libspdk_event_bdev.so.6.0 00:03:37.162 SYMLINK libspdk_event_bdev.so 00:03:37.422 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.422 CC module/event/subsystems/scsi/scsi.o 00:03:37.422 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.422 CC module/event/subsystems/nbd/nbd.o 00:03:37.422 CC module/event/subsystems/ublk/ublk.o 00:03:37.682 LIB libspdk_event_ublk.a 00:03:37.682 LIB libspdk_event_nbd.a 00:03:37.682 LIB libspdk_event_scsi.a 00:03:37.682 SO libspdk_event_ublk.so.3.0 00:03:37.682 SO libspdk_event_nbd.so.6.0 00:03:37.682 SO libspdk_event_scsi.so.6.0 00:03:37.682 LIB libspdk_event_nvmf.a 00:03:37.682 SYMLINK libspdk_event_ublk.so 00:03:37.682 SO libspdk_event_nvmf.so.6.0 00:03:37.682 SYMLINK libspdk_event_nbd.so 00:03:37.682 SYMLINK libspdk_event_scsi.so 00:03:37.682 SYMLINK libspdk_event_nvmf.so 00:03:38.254 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:38.254 CC module/event/subsystems/iscsi/iscsi.o 00:03:38.254 LIB libspdk_event_vhost_scsi.a 00:03:38.254 LIB libspdk_event_iscsi.a 00:03:38.254 SO libspdk_event_vhost_scsi.so.3.0 00:03:38.254 SO libspdk_event_iscsi.so.6.0 00:03:38.254 SYMLINK libspdk_event_vhost_scsi.so 00:03:38.254 SYMLINK libspdk_event_iscsi.so 00:03:38.515 SO libspdk.so.6.0 00:03:38.515 SYMLINK libspdk.so 00:03:39.100 CC app/spdk_nvme_discover/discovery_aer.o 00:03:39.100 CC app/spdk_lspci/spdk_lspci.o 00:03:39.100 CC app/spdk_nvme_identify/identify.o 00:03:39.100 CC app/spdk_top/spdk_top.o 00:03:39.100 CC test/rpc_client/rpc_client_test.o 00:03:39.100 CC app/spdk_nvme_perf/perf.o 00:03:39.100 CC app/trace_record/trace_record.o 00:03:39.100 CXX app/trace/trace.o 00:03:39.100 TEST_HEADER include/spdk/accel.h 00:03:39.100 TEST_HEADER include/spdk/accel_module.h 00:03:39.100 TEST_HEADER include/spdk/assert.h 00:03:39.100 TEST_HEADER include/spdk/barrier.h 00:03:39.100 TEST_HEADER include/spdk/bdev_module.h 00:03:39.100 TEST_HEADER include/spdk/bdev.h 00:03:39.100 TEST_HEADER include/spdk/base64.h 00:03:39.100 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.100 TEST_HEADER include/spdk/bit_array.h 00:03:39.100 TEST_HEADER include/spdk/bit_pool.h 00:03:39.100 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.100 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.100 TEST_HEADER include/spdk/blobfs.h 00:03:39.100 TEST_HEADER include/spdk/conf.h 
00:03:39.100 TEST_HEADER include/spdk/blob.h 00:03:39.100 TEST_HEADER include/spdk/config.h 00:03:39.100 TEST_HEADER include/spdk/cpuset.h 00:03:39.100 TEST_HEADER include/spdk/crc16.h 00:03:39.100 TEST_HEADER include/spdk/crc32.h 00:03:39.100 TEST_HEADER include/spdk/crc64.h 00:03:39.100 TEST_HEADER include/spdk/dma.h 00:03:39.100 TEST_HEADER include/spdk/endian.h 00:03:39.100 TEST_HEADER include/spdk/dif.h 00:03:39.100 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.100 TEST_HEADER include/spdk/env.h 00:03:39.100 TEST_HEADER include/spdk/fd_group.h 00:03:39.100 TEST_HEADER include/spdk/fd.h 00:03:39.100 TEST_HEADER include/spdk/event.h 00:03:39.100 TEST_HEADER include/spdk/file.h 00:03:39.100 TEST_HEADER include/spdk/fsdev.h 00:03:39.100 TEST_HEADER include/spdk/fsdev_module.h 00:03:39.100 TEST_HEADER include/spdk/ftl.h 00:03:39.100 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:39.100 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.100 TEST_HEADER include/spdk/hexlify.h 00:03:39.100 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.100 TEST_HEADER include/spdk/histogram_data.h 00:03:39.100 CC app/spdk_dd/spdk_dd.o 00:03:39.100 TEST_HEADER include/spdk/idxd.h 00:03:39.100 TEST_HEADER include/spdk/init.h 00:03:39.100 TEST_HEADER include/spdk/ioat.h 00:03:39.100 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.100 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.100 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.101 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.101 TEST_HEADER include/spdk/json.h 00:03:39.101 TEST_HEADER include/spdk/keyring.h 00:03:39.101 TEST_HEADER include/spdk/likely.h 00:03:39.101 TEST_HEADER include/spdk/keyring_module.h 00:03:39.101 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.101 TEST_HEADER include/spdk/log.h 00:03:39.101 CC app/nvmf_tgt/nvmf_main.o 00:03:39.101 TEST_HEADER include/spdk/md5.h 00:03:39.101 TEST_HEADER include/spdk/memory.h 00:03:39.101 TEST_HEADER include/spdk/lvol.h 00:03:39.101 TEST_HEADER include/spdk/mmio.h 00:03:39.101 TEST_HEADER include/spdk/nbd.h 00:03:39.101 TEST_HEADER include/spdk/net.h 00:03:39.101 TEST_HEADER include/spdk/notify.h 00:03:39.101 TEST_HEADER include/spdk/nvme.h 00:03:39.101 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.101 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.101 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.101 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.101 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.101 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.101 TEST_HEADER include/spdk/nvmf.h 00:03:39.101 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.101 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.101 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.101 CC app/spdk_tgt/spdk_tgt.o 00:03:39.101 TEST_HEADER include/spdk/opal.h 00:03:39.101 TEST_HEADER include/spdk/pci_ids.h 00:03:39.101 TEST_HEADER include/spdk/opal_spec.h 00:03:39.101 TEST_HEADER include/spdk/pipe.h 00:03:39.101 TEST_HEADER include/spdk/reduce.h 00:03:39.101 TEST_HEADER include/spdk/queue.h 00:03:39.101 TEST_HEADER include/spdk/rpc.h 00:03:39.101 TEST_HEADER include/spdk/scheduler.h 00:03:39.101 TEST_HEADER include/spdk/sock.h 00:03:39.101 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.101 TEST_HEADER include/spdk/scsi.h 00:03:39.101 TEST_HEADER include/spdk/string.h 00:03:39.101 TEST_HEADER include/spdk/thread.h 00:03:39.101 TEST_HEADER include/spdk/trace.h 00:03:39.101 TEST_HEADER include/spdk/stdinc.h 00:03:39.101 TEST_HEADER include/spdk/trace_parser.h 00:03:39.101 TEST_HEADER include/spdk/util.h 00:03:39.101 TEST_HEADER include/spdk/tree.h 
00:03:39.101 TEST_HEADER include/spdk/uuid.h 00:03:39.101 TEST_HEADER include/spdk/ublk.h 00:03:39.101 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.101 TEST_HEADER include/spdk/version.h 00:03:39.101 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.101 TEST_HEADER include/spdk/vhost.h 00:03:39.101 TEST_HEADER include/spdk/vmd.h 00:03:39.101 TEST_HEADER include/spdk/zipf.h 00:03:39.101 TEST_HEADER include/spdk/xor.h 00:03:39.101 CXX test/cpp_headers/accel_module.o 00:03:39.101 CXX test/cpp_headers/assert.o 00:03:39.101 CXX test/cpp_headers/accel.o 00:03:39.101 CXX test/cpp_headers/base64.o 00:03:39.101 CXX test/cpp_headers/barrier.o 00:03:39.101 CXX test/cpp_headers/bdev.o 00:03:39.101 CXX test/cpp_headers/bdev_module.o 00:03:39.101 CXX test/cpp_headers/bit_array.o 00:03:39.101 CXX test/cpp_headers/blob_bdev.o 00:03:39.101 CXX test/cpp_headers/bdev_zone.o 00:03:39.101 CXX test/cpp_headers/blobfs_bdev.o 00:03:39.101 CXX test/cpp_headers/bit_pool.o 00:03:39.101 CXX test/cpp_headers/blobfs.o 00:03:39.101 CXX test/cpp_headers/config.o 00:03:39.101 CXX test/cpp_headers/blob.o 00:03:39.101 CXX test/cpp_headers/conf.o 00:03:39.101 CXX test/cpp_headers/crc16.o 00:03:39.101 CXX test/cpp_headers/cpuset.o 00:03:39.101 CXX test/cpp_headers/crc32.o 00:03:39.101 CXX test/cpp_headers/crc64.o 00:03:39.101 CXX test/cpp_headers/dif.o 00:03:39.101 CXX test/cpp_headers/env_dpdk.o 00:03:39.101 CXX test/cpp_headers/dma.o 00:03:39.101 CXX test/cpp_headers/env.o 00:03:39.101 CXX test/cpp_headers/endian.o 00:03:39.101 CXX test/cpp_headers/fd_group.o 00:03:39.101 CXX test/cpp_headers/event.o 00:03:39.101 CXX test/cpp_headers/fd.o 00:03:39.101 CXX test/cpp_headers/file.o 00:03:39.101 CXX test/cpp_headers/fsdev_module.o 00:03:39.101 CXX test/cpp_headers/fsdev.o 00:03:39.101 CXX test/cpp_headers/ftl.o 00:03:39.101 CXX test/cpp_headers/fuse_dispatcher.o 00:03:39.101 CXX test/cpp_headers/hexlify.o 00:03:39.101 CXX test/cpp_headers/gpt_spec.o 00:03:39.101 CXX test/cpp_headers/idxd.o 00:03:39.101 CXX test/cpp_headers/init.o 00:03:39.101 CXX test/cpp_headers/idxd_spec.o 00:03:39.101 CXX test/cpp_headers/histogram_data.o 00:03:39.101 CXX test/cpp_headers/iscsi_spec.o 00:03:39.101 CXX test/cpp_headers/ioat.o 00:03:39.101 CXX test/cpp_headers/ioat_spec.o 00:03:39.101 CXX test/cpp_headers/jsonrpc.o 00:03:39.101 CXX test/cpp_headers/json.o 00:03:39.101 CXX test/cpp_headers/keyring_module.o 00:03:39.101 CXX test/cpp_headers/keyring.o 00:03:39.101 CXX test/cpp_headers/log.o 00:03:39.101 CXX test/cpp_headers/likely.o 00:03:39.101 CXX test/cpp_headers/md5.o 00:03:39.101 CXX test/cpp_headers/lvol.o 00:03:39.101 CXX test/cpp_headers/mmio.o 00:03:39.101 CXX test/cpp_headers/memory.o 00:03:39.101 CXX test/cpp_headers/nbd.o 00:03:39.101 CXX test/cpp_headers/notify.o 00:03:39.101 CXX test/cpp_headers/nvme.o 00:03:39.101 CXX test/cpp_headers/net.o 00:03:39.101 CXX test/cpp_headers/nvme_intel.o 00:03:39.101 CXX test/cpp_headers/nvme_zns.o 00:03:39.101 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:39.101 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.101 CXX test/cpp_headers/nvme_spec.o 00:03:39.101 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.101 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.101 CXX test/cpp_headers/nvmf.o 00:03:39.101 CXX test/cpp_headers/nvmf_spec.o 00:03:39.101 CXX test/cpp_headers/opal.o 00:03:39.101 CXX test/cpp_headers/nvmf_transport.o 00:03:39.101 CXX test/cpp_headers/opal_spec.o 00:03:39.101 CXX test/cpp_headers/pci_ids.o 00:03:39.101 CXX test/cpp_headers/queue.o 00:03:39.101 CXX test/cpp_headers/pipe.o 00:03:39.101 
CXX test/cpp_headers/reduce.o 00:03:39.101 CXX test/cpp_headers/rpc.o 00:03:39.101 CXX test/cpp_headers/scheduler.o 00:03:39.101 CXX test/cpp_headers/scsi.o 00:03:39.101 CXX test/cpp_headers/scsi_spec.o 00:03:39.101 CXX test/cpp_headers/sock.o 00:03:39.101 CXX test/cpp_headers/stdinc.o 00:03:39.101 CXX test/cpp_headers/string.o 00:03:39.101 CXX test/cpp_headers/thread.o 00:03:39.101 CXX test/cpp_headers/trace.o 00:03:39.101 CXX test/cpp_headers/trace_parser.o 00:03:39.101 CXX test/cpp_headers/tree.o 00:03:39.101 CC examples/util/zipf/zipf.o 00:03:39.101 CC test/app/stub/stub.o 00:03:39.101 CC test/thread/poller_perf/poller_perf.o 00:03:39.101 CC test/app/histogram_perf/histogram_perf.o 00:03:39.101 CC examples/ioat/verify/verify.o 00:03:39.101 CC test/app/jsoncat/jsoncat.o 00:03:39.101 CC test/env/vtophys/vtophys.o 00:03:39.101 CC test/env/memory/memory_ut.o 00:03:39.101 CC examples/ioat/perf/perf.o 00:03:39.101 CC test/env/pci/pci_ut.o 00:03:39.101 CC app/fio/nvme/fio_plugin.o 00:03:39.101 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.391 CC test/app/bdev_svc/bdev_svc.o 00:03:39.391 LINK spdk_lspci 00:03:39.391 CXX test/cpp_headers/ublk.o 00:03:39.391 CC test/dma/test_dma/test_dma.o 00:03:39.391 CC app/fio/bdev/fio_plugin.o 00:03:39.391 LINK spdk_nvme_discover 00:03:39.391 LINK rpc_client_test 00:03:39.668 LINK interrupt_tgt 00:03:39.668 LINK nvmf_tgt 00:03:39.668 CC test/env/mem_callbacks/mem_callbacks.o 00:03:39.668 LINK iscsi_tgt 00:03:39.668 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:39.668 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:39.668 LINK spdk_tgt 00:03:39.929 LINK histogram_perf 00:03:39.929 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:39.929 LINK vtophys 00:03:39.929 LINK poller_perf 00:03:39.929 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:39.929 CXX test/cpp_headers/util.o 00:03:39.929 LINK env_dpdk_post_init 00:03:39.929 CXX test/cpp_headers/uuid.o 00:03:39.929 LINK spdk_trace_record 00:03:39.929 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.929 CXX test/cpp_headers/version.o 00:03:39.929 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.929 CXX test/cpp_headers/vhost.o 00:03:39.929 CXX test/cpp_headers/vmd.o 00:03:39.929 CXX test/cpp_headers/xor.o 00:03:39.929 CXX test/cpp_headers/zipf.o 00:03:39.929 LINK jsoncat 00:03:39.929 LINK zipf 00:03:39.929 LINK bdev_svc 00:03:39.929 LINK verify 00:03:39.929 LINK ioat_perf 00:03:39.929 LINK stub 00:03:39.929 LINK spdk_dd 00:03:39.929 LINK mem_callbacks 00:03:40.189 LINK spdk_trace 00:03:40.189 LINK pci_ut 00:03:40.189 LINK test_dma 00:03:40.189 LINK nvme_fuzz 00:03:40.189 LINK vhost_fuzz 00:03:40.189 LINK spdk_nvme_perf 00:03:40.189 LINK spdk_nvme_identify 00:03:40.449 LINK spdk_bdev 00:03:40.449 LINK spdk_nvme 00:03:40.449 CC test/event/event_perf/event_perf.o 00:03:40.449 CC test/event/reactor_perf/reactor_perf.o 00:03:40.449 CC test/event/reactor/reactor.o 00:03:40.449 LINK spdk_top 00:03:40.449 CC test/event/app_repeat/app_repeat.o 00:03:40.449 CC test/event/scheduler/scheduler.o 00:03:40.449 LINK memory_ut 00:03:40.449 CC examples/idxd/perf/perf.o 00:03:40.449 CC examples/sock/hello_world/hello_sock.o 00:03:40.449 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.449 CC examples/vmd/led/led.o 00:03:40.449 CC app/vhost/vhost.o 00:03:40.449 CC examples/thread/thread/thread_ex.o 00:03:40.449 LINK reactor_perf 00:03:40.449 LINK event_perf 00:03:40.449 LINK reactor 00:03:40.449 LINK lsvmd 00:03:40.708 LINK app_repeat 00:03:40.708 LINK led 00:03:40.708 LINK scheduler 00:03:40.708 LINK hello_sock 00:03:40.708 LINK 
vhost 00:03:40.708 LINK idxd_perf 00:03:40.708 CC test/nvme/connect_stress/connect_stress.o 00:03:40.708 CC test/nvme/aer/aer.o 00:03:40.708 CC test/nvme/simple_copy/simple_copy.o 00:03:40.708 CC test/nvme/compliance/nvme_compliance.o 00:03:40.708 CC test/nvme/cuse/cuse.o 00:03:40.708 CC test/nvme/reserve/reserve.o 00:03:40.708 CC test/nvme/fused_ordering/fused_ordering.o 00:03:40.708 CC test/nvme/sgl/sgl.o 00:03:40.708 CC test/nvme/startup/startup.o 00:03:40.708 CC test/nvme/e2edp/nvme_dp.o 00:03:40.708 CC test/nvme/overhead/overhead.o 00:03:40.708 CC test/nvme/boot_partition/boot_partition.o 00:03:40.708 CC test/nvme/fdp/fdp.o 00:03:40.708 CC test/nvme/reset/reset.o 00:03:40.708 CC test/nvme/err_injection/err_injection.o 00:03:40.708 LINK thread 00:03:40.708 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:40.708 CC test/accel/dif/dif.o 00:03:40.708 CC test/blobfs/mkfs/mkfs.o 00:03:40.708 CC test/lvol/esnap/esnap.o 00:03:40.967 LINK boot_partition 00:03:40.967 LINK connect_stress 00:03:40.967 LINK startup 00:03:40.967 LINK reserve 00:03:40.967 LINK fused_ordering 00:03:40.967 LINK doorbell_aers 00:03:40.967 LINK simple_copy 00:03:40.967 LINK err_injection 00:03:40.967 LINK aer 00:03:40.967 LINK mkfs 00:03:40.967 LINK sgl 00:03:40.967 LINK reset 00:03:40.967 LINK overhead 00:03:40.967 LINK nvme_dp 00:03:40.967 LINK nvme_compliance 00:03:40.967 LINK fdp 00:03:41.227 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:41.227 CC examples/nvme/reconnect/reconnect.o 00:03:41.227 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:41.227 CC examples/nvme/hotplug/hotplug.o 00:03:41.227 CC examples/nvme/abort/abort.o 00:03:41.228 CC examples/nvme/hello_world/hello_world.o 00:03:41.228 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:41.228 CC examples/nvme/arbitration/arbitration.o 00:03:41.228 LINK iscsi_fuzz 00:03:41.228 CC examples/accel/perf/accel_perf.o 00:03:41.228 CC examples/blob/cli/blobcli.o 00:03:41.228 LINK cmb_copy 00:03:41.228 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:41.228 LINK dif 00:03:41.228 CC examples/blob/hello_world/hello_blob.o 00:03:41.228 LINK pmr_persistence 00:03:41.228 LINK hotplug 00:03:41.228 LINK hello_world 00:03:41.488 LINK reconnect 00:03:41.488 LINK arbitration 00:03:41.488 LINK abort 00:03:41.488 LINK nvme_manage 00:03:41.488 LINK hello_blob 00:03:41.488 LINK hello_fsdev 00:03:41.749 LINK accel_perf 00:03:41.749 LINK blobcli 00:03:41.749 LINK cuse 00:03:41.749 CC test/bdev/bdevio/bdevio.o 00:03:42.319 LINK bdevio 00:03:42.319 CC examples/bdev/hello_world/hello_bdev.o 00:03:42.319 CC examples/bdev/bdevperf/bdevperf.o 00:03:42.579 LINK hello_bdev 00:03:42.840 LINK bdevperf 00:03:43.415 CC examples/nvmf/nvmf/nvmf.o 00:03:43.676 LINK nvmf 00:03:44.248 LINK esnap 00:03:44.509 00:03:44.509 real 0m54.158s 00:03:44.509 user 6m10.563s 00:03:44.509 sys 2m59.022s 00:03:44.509 04:20:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:44.509 04:20:23 make -- common/autotest_common.sh@10 -- $ set +x 00:03:44.509 ************************************ 00:03:44.509 END TEST make 00:03:44.509 ************************************ 00:03:44.770 04:20:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:44.770 04:20:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:44.770 04:20:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:44.770 04:20:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.770 04:20:23 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:44.770 04:20:23 -- pm/common@44 -- $ pid=6816 00:03:44.770 04:20:23 -- pm/common@50 -- $ kill -TERM 6816 00:03:44.770 04:20:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.770 04:20:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:44.770 04:20:23 -- pm/common@44 -- $ pid=6818 00:03:44.770 04:20:23 -- pm/common@50 -- $ kill -TERM 6818 00:03:44.770 04:20:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.770 04:20:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:44.770 04:20:23 -- pm/common@44 -- $ pid=6820 00:03:44.770 04:20:23 -- pm/common@50 -- $ kill -TERM 6820 00:03:44.770 04:20:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.770 04:20:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:44.770 04:20:23 -- pm/common@44 -- $ pid=6842 00:03:44.770 04:20:23 -- pm/common@50 -- $ sudo -E kill -TERM 6842 00:03:44.770 04:20:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:44.770 04:20:23 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:03:44.770 04:20:23 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:44.770 04:20:23 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:44.770 04:20:23 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:44.770 04:20:23 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:44.770 04:20:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.770 04:20:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.770 04:20:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.770 04:20:23 -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.770 04:20:23 -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.770 04:20:23 -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.770 04:20:23 -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.770 04:20:23 -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.770 04:20:23 -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.770 04:20:23 -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.770 04:20:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.770 04:20:23 -- scripts/common.sh@344 -- # case "$op" in 00:03:44.770 04:20:23 -- scripts/common.sh@345 -- # : 1 00:03:44.770 04:20:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.770 04:20:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.770 04:20:23 -- scripts/common.sh@365 -- # decimal 1 00:03:44.770 04:20:23 -- scripts/common.sh@353 -- # local d=1 00:03:44.770 04:20:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.770 04:20:23 -- scripts/common.sh@355 -- # echo 1 00:03:44.770 04:20:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.770 04:20:23 -- scripts/common.sh@366 -- # decimal 2 00:03:44.770 04:20:23 -- scripts/common.sh@353 -- # local d=2 00:03:44.770 04:20:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.770 04:20:23 -- scripts/common.sh@355 -- # echo 2 00:03:44.770 04:20:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.770 04:20:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.770 04:20:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.770 04:20:23 -- scripts/common.sh@368 -- # return 0 00:03:44.770 04:20:23 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.770 04:20:23 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:44.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.770 --rc genhtml_branch_coverage=1 00:03:44.770 --rc genhtml_function_coverage=1 00:03:44.770 --rc genhtml_legend=1 00:03:44.770 --rc geninfo_all_blocks=1 00:03:44.770 --rc geninfo_unexecuted_blocks=1 00:03:44.770 00:03:44.770 ' 00:03:44.770 04:20:23 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:44.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.770 --rc genhtml_branch_coverage=1 00:03:44.770 --rc genhtml_function_coverage=1 00:03:44.770 --rc genhtml_legend=1 00:03:44.770 --rc geninfo_all_blocks=1 00:03:44.770 --rc geninfo_unexecuted_blocks=1 00:03:44.770 00:03:44.770 ' 00:03:44.770 04:20:23 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:44.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.770 --rc genhtml_branch_coverage=1 00:03:44.770 --rc genhtml_function_coverage=1 00:03:44.770 --rc genhtml_legend=1 00:03:44.770 --rc geninfo_all_blocks=1 00:03:44.770 --rc geninfo_unexecuted_blocks=1 00:03:44.770 00:03:44.770 ' 00:03:44.770 04:20:23 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:44.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.771 --rc genhtml_branch_coverage=1 00:03:44.771 --rc genhtml_function_coverage=1 00:03:44.771 --rc genhtml_legend=1 00:03:44.771 --rc geninfo_all_blocks=1 00:03:44.771 --rc geninfo_unexecuted_blocks=1 00:03:44.771 00:03:44.771 ' 00:03:44.771 04:20:23 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.771 04:20:23 -- nvmf/common.sh@7 -- # uname -s 00:03:44.771 04:20:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.771 04:20:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.771 04:20:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.771 04:20:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.771 04:20:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.771 04:20:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.771 04:20:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.771 04:20:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.771 04:20:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.771 04:20:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.032 04:20:23 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:45.032 04:20:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:45.032 04:20:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.032 04:20:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.032 04:20:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:45.032 04:20:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:45.032 04:20:23 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:45.032 04:20:23 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:45.032 04:20:23 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.032 04:20:23 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.032 04:20:23 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.032 04:20:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.032 04:20:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.032 04:20:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.032 04:20:23 -- paths/export.sh@5 -- # export PATH 00:03:45.032 04:20:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.032 04:20:23 -- nvmf/common.sh@51 -- # : 0 00:03:45.032 04:20:23 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:45.032 04:20:23 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:45.032 04:20:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:45.032 04:20:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.032 04:20:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.032 04:20:23 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:45.032 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:45.032 04:20:23 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:45.032 04:20:23 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:45.032 04:20:23 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:45.032 04:20:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.032 04:20:23 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.032 04:20:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.032 04:20:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:45.032 04:20:23 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:45.032 
04:20:23 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.032 04:20:23 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:45.032 04:20:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.032 04:20:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.032 04:20:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:45.032 04:20:23 -- spdk/autotest.sh@48 -- # udevadm_pid=85569 00:03:45.032 04:20:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:45.032 04:20:23 -- pm/common@17 -- # local monitor 00:03:45.032 04:20:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:45.032 04:20:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.032 04:20:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.032 04:20:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.032 04:20:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.032 04:20:23 -- pm/common@21 -- # date +%s 00:03:45.032 04:20:23 -- pm/common@21 -- # date +%s 00:03:45.032 04:20:23 -- pm/common@25 -- # sleep 1 00:03:45.032 04:20:23 -- pm/common@21 -- # date +%s 00:03:45.032 04:20:23 -- pm/common@21 -- # date +%s 00:03:45.032 04:20:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731813623 00:03:45.032 04:20:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731813623 00:03:45.032 04:20:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731813623 00:03:45.032 04:20:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731813623 00:03:45.032 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731813623_collect-vmstat.pm.log 00:03:45.032 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731813623_collect-cpu-load.pm.log 00:03:45.032 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731813623_collect-cpu-temp.pm.log 00:03:45.032 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731813623_collect-bmc-pm.bmc.pm.log 00:03:45.975 04:20:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:45.975 04:20:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:45.975 04:20:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.975 04:20:24 -- common/autotest_common.sh@10 -- # set +x 00:03:45.975 04:20:24 -- spdk/autotest.sh@59 -- # create_test_list 00:03:45.975 04:20:24 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:45.975 04:20:24 -- common/autotest_common.sh@10 -- # set +x 00:03:45.975 04:20:24 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:45.975 04:20:24 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:45.975 04:20:24 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:45.975 04:20:24 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:45.975 04:20:24 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:45.975 04:20:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:45.975 04:20:24 -- common/autotest_common.sh@1457 -- # uname 00:03:45.975 04:20:24 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:45.975 04:20:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:45.975 04:20:24 -- common/autotest_common.sh@1477 -- # uname 00:03:46.236 04:20:24 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:46.236 04:20:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:46.236 04:20:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:46.236 lcov: LCOV version 1.15 00:03:46.236 04:20:24 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:04.359 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.359 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:10.940 04:20:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:10.940 04:20:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.940 04:20:49 -- common/autotest_common.sh@10 -- # set +x 00:04:10.940 04:20:49 -- spdk/autotest.sh@78 -- # rm -f 00:04:10.940 04:20:49 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.241 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:14.241 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:14.241 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:14.241 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:14.241 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:14.241 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:14.241 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:14.241 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:14.241 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:14.241 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:14.502 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:14.502 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:14.502 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:14.502 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:14.502 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:14.502 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:14.502 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:04:14.502 04:20:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:14.502 04:20:53 -- 
common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:14.502 04:20:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:14.502 04:20:53 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:14.502 04:20:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:14.502 04:20:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:14.502 04:20:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:14.502 04:20:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.502 04:20:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.502 04:20:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:14.502 04:20:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.502 04:20:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.502 04:20:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:14.502 04:20:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:14.502 04:20:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:14.502 No valid GPT data, bailing 00:04:14.762 04:20:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.762 04:20:53 -- scripts/common.sh@394 -- # pt= 00:04:14.762 04:20:53 -- scripts/common.sh@395 -- # return 1 00:04:14.762 04:20:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:14.762 1+0 records in 00:04:14.763 1+0 records out 00:04:14.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611943 s, 171 MB/s 00:04:14.763 04:20:53 -- spdk/autotest.sh@105 -- # sync 00:04:14.763 04:20:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:14.763 04:20:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:14.763 04:20:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:22.899 04:21:00 -- spdk/autotest.sh@111 -- # uname -s 00:04:22.899 04:21:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:22.899 04:21:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:22.899 04:21:00 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:26.200 Hugepages 00:04:26.200 node hugesize free / total 00:04:26.200 node0 1048576kB 0 / 0 00:04:26.200 node0 2048kB 0 / 0 00:04:26.200 node1 1048576kB 0 / 0 00:04:26.200 node1 2048kB 0 / 0 00:04:26.200 00:04:26.200 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:26.200 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:26.200 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:26.200 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:26.200 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:26.200 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:26.200 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:26.200 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:26.200 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:26.200 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:26.200 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:26.200 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:26.200 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:26.200 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:26.200 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:26.200 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:26.200 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:26.200 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:26.200 04:21:04 -- spdk/autotest.sh@117 -- # uname 
-s 00:04:26.200 04:21:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:26.200 04:21:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:26.200 04:21:04 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:29.504 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.504 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.416 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:31.416 04:21:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:32.398 04:21:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:32.398 04:21:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:32.399 04:21:11 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.399 04:21:11 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:32.399 04:21:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:32.399 04:21:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:32.659 04:21:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.659 04:21:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.659 04:21:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:32.659 04:21:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:32.659 04:21:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:32.659 04:21:11 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.960 Waiting for block devices as requested 00:04:35.960 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.221 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.221 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.221 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.482 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:36.482 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.482 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.742 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:36.742 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.742 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:37.002 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:37.002 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:37.002 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:37.266 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:37.266 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:37.266 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:37.528 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:37.528 04:21:16 -- common/autotest_common.sh@1524 -- # for bdf in 
"${bdfs[@]}" 00:04:37.528 04:21:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:37.528 04:21:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.528 04:21:16 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:04:37.528 04:21:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:37.528 04:21:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:37.528 04:21:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:37.528 04:21:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:37.528 04:21:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:37.528 04:21:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:37.528 04:21:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:37.528 04:21:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:37.528 04:21:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:37.528 04:21:16 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:37.528 04:21:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:37.528 04:21:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:37.528 04:21:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:37.528 04:21:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:37.528 04:21:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.789 04:21:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:37.789 04:21:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:37.789 04:21:16 -- common/autotest_common.sh@1543 -- # continue 00:04:37.789 04:21:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:37.789 04:21:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.789 04:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:37.789 04:21:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:37.789 04:21:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.789 04:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:37.789 04:21:16 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:41.092 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.092 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.352 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.352 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.352 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.352 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.266 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:43.266 04:21:22 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:43.266 04:21:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.266 
04:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.526 04:21:22 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:43.526 04:21:22 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:43.526 04:21:22 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.526 04:21:22 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:43.526 04:21:22 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:43.526 04:21:22 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:43.526 04:21:22 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:43.526 04:21:22 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:43.526 04:21:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:43.526 04:21:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:43.526 04:21:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.526 04:21:22 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.526 04:21:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:43.526 04:21:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:43.526 04:21:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:43.526 04:21:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:43.526 04:21:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:43.526 04:21:22 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:43.526 04:21:22 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:43.526 04:21:22 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:43.526 04:21:22 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:43.526 04:21:22 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:04:43.526 04:21:22 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:04:43.526 04:21:22 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=101926 00:04:43.526 04:21:22 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.526 04:21:22 -- common/autotest_common.sh@1585 -- # waitforlisten 101926 00:04:43.526 04:21:22 -- common/autotest_common.sh@835 -- # '[' -z 101926 ']' 00:04:43.526 04:21:22 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.526 04:21:22 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.526 04:21:22 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.526 04:21:22 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.526 04:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.526 [2024-11-17 04:21:22.299816] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
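The trace above builds its NVMe BDF list by asking scripts/gen_nvme.sh for a JSON bdev config, pulling each traddr out with jq, and then filtering on the PCI device ID read from sysfs. A minimal standalone sketch of that pattern follows — the workspace path and the 0x0a54 device ID are taken from this log, everything else is illustrative, not the exact body of autotest_common.sh:

#!/usr/bin/env bash
# Sketch only: mirrors the get_nvme_bdfs / device-ID filter traced in this log.
rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # workspace layout from this run

# gen_nvme.sh prints a JSON bdev config; each .config[].params.traddr is a PCI BDF.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

for bdf in "${bdfs[@]}"; do
    # Keep only controllers whose PCI device ID matches the one this test targets.
    if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]]; then
        printf '%s\n' "$bdf"
    fi
done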
00:04:43.526 [2024-11-17 04:21:22.299874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101926 ] 00:04:43.787 [2024-11-17 04:21:22.392476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.787 [2024-11-17 04:21:22.415773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.358 04:21:23 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.358 04:21:23 -- common/autotest_common.sh@868 -- # return 0 00:04:44.358 04:21:23 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:44.358 04:21:23 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:44.358 04:21:23 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:47.657 nvme0n1 00:04:47.657 04:21:26 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:47.657 [2024-11-17 04:21:26.302463] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:47.657 request: 00:04:47.657 { 00:04:47.657 "nvme_ctrlr_name": "nvme0", 00:04:47.657 "password": "test", 00:04:47.657 "method": "bdev_nvme_opal_revert", 00:04:47.657 "req_id": 1 00:04:47.657 } 00:04:47.657 Got JSON-RPC error response 00:04:47.657 response: 00:04:47.657 { 00:04:47.657 "code": -32602, 00:04:47.657 "message": "Invalid parameters" 00:04:47.657 } 00:04:47.657 04:21:26 -- common/autotest_common.sh@1591 -- # true 00:04:47.657 04:21:26 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:47.657 04:21:26 -- common/autotest_common.sh@1595 -- # killprocess 101926 00:04:47.657 04:21:26 -- common/autotest_common.sh@954 -- # '[' -z 101926 ']' 00:04:47.657 04:21:26 -- common/autotest_common.sh@958 -- # kill -0 101926 00:04:47.657 04:21:26 -- common/autotest_common.sh@959 -- # uname 00:04:47.657 04:21:26 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.657 04:21:26 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101926 00:04:47.657 04:21:26 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.657 04:21:26 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.657 04:21:26 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101926' 00:04:47.657 killing process with pid 101926 00:04:47.657 04:21:26 -- common/autotest_common.sh@973 -- # kill 101926 00:04:47.657 04:21:26 -- common/autotest_common.sh@978 -- # wait 101926 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.657 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152
0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead 
of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:47.920 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:50.464 04:21:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:50.464 04:21:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:50.464 04:21:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:50.464 04:21:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:50.464 04:21:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:50.464 04:21:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.465 04:21:28 -- common/autotest_common.sh@10 -- # set +x 00:04:50.465 04:21:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:50.465 04:21:28 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:50.465 04:21:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.465 04:21:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.465 04:21:28 -- common/autotest_common.sh@10 -- # set +x 00:04:50.465 ************************************ 00:04:50.465 START TEST env 00:04:50.465 ************************************ 00:04:50.465 04:21:28 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:50.465 * Looking for test storage... 00:04:50.465 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:50.465 04:21:28 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.465 04:21:28 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.465 04:21:28 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.465 04:21:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.465 04:21:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.465 04:21:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.465 04:21:29 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.465 04:21:29 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.465 04:21:29 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.465 04:21:29 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.465 04:21:29 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.465 04:21:29 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.465 04:21:29 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.465 04:21:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.465 04:21:29 env -- scripts/common.sh@344 -- # case "$op" in 00:04:50.465 04:21:29 env -- scripts/common.sh@345 -- # : 1 00:04:50.465 04:21:29 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.465 04:21:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.465 04:21:29 env -- scripts/common.sh@365 -- # decimal 1 00:04:50.465 04:21:29 env -- scripts/common.sh@353 -- # local d=1 00:04:50.465 04:21:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.465 04:21:29 env -- scripts/common.sh@355 -- # echo 1 00:04:50.465 04:21:29 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.465 04:21:29 env -- scripts/common.sh@366 -- # decimal 2 00:04:50.465 04:21:29 env -- scripts/common.sh@353 -- # local d=2 00:04:50.465 04:21:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.465 04:21:29 env -- scripts/common.sh@355 -- # echo 2 00:04:50.465 04:21:29 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.465 04:21:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.465 04:21:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.465 04:21:29 env -- scripts/common.sh@368 -- # return 0 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.465 --rc genhtml_branch_coverage=1 00:04:50.465 --rc genhtml_function_coverage=1 00:04:50.465 --rc genhtml_legend=1 00:04:50.465 --rc geninfo_all_blocks=1 00:04:50.465 --rc geninfo_unexecuted_blocks=1 00:04:50.465 00:04:50.465 ' 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.465 --rc genhtml_branch_coverage=1 00:04:50.465 --rc genhtml_function_coverage=1 00:04:50.465 --rc genhtml_legend=1 00:04:50.465 --rc geninfo_all_blocks=1 00:04:50.465 --rc geninfo_unexecuted_blocks=1 00:04:50.465 00:04:50.465 ' 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.465 --rc genhtml_branch_coverage=1 00:04:50.465 --rc genhtml_function_coverage=1 00:04:50.465 --rc genhtml_legend=1 00:04:50.465 --rc geninfo_all_blocks=1 00:04:50.465 --rc geninfo_unexecuted_blocks=1 00:04:50.465 00:04:50.465 ' 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.465 --rc genhtml_branch_coverage=1 00:04:50.465 --rc genhtml_function_coverage=1 00:04:50.465 --rc genhtml_legend=1 00:04:50.465 --rc geninfo_all_blocks=1 00:04:50.465 --rc geninfo_unexecuted_blocks=1 00:04:50.465 00:04:50.465 ' 00:04:50.465 04:21:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.465 04:21:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.465 ************************************ 00:04:50.465 START TEST env_memory 00:04:50.465 ************************************ 00:04:50.465 04:21:29 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:50.465 00:04:50.465 00:04:50.465 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.465 http://cunit.sourceforge.net/ 00:04:50.465 00:04:50.465 00:04:50.465 Suite: memory 00:04:50.465 Test: alloc and free memory map ...[2024-11-17 04:21:29.120277] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:50.465 passed 00:04:50.465 Test: mem map translation ...[2024-11-17 04:21:29.138681] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:50.465 [2024-11-17 04:21:29.138698] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:50.465 [2024-11-17 04:21:29.138732] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:50.465 [2024-11-17 04:21:29.138740] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:50.465 passed 00:04:50.465 Test: mem map registration ...[2024-11-17 04:21:29.175515] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:50.465 [2024-11-17 04:21:29.175532] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:50.465 passed 00:04:50.465 Test: mem map adjacent registrations ...passed 00:04:50.465 00:04:50.465 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.465 suites 1 1 n/a 0 0 00:04:50.465 tests 4 4 4 0 0 00:04:50.465 asserts 152 152 152 0 n/a 00:04:50.465 00:04:50.465 Elapsed time = 0.124 seconds 00:04:50.465 00:04:50.465 real 0m0.134s 00:04:50.465 user 0m0.124s 00:04:50.465 sys 0m0.009s 00:04:50.465 04:21:29 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.465 04:21:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:50.465 ************************************ 00:04:50.465 END TEST env_memory 00:04:50.465 ************************************ 00:04:50.465 04:21:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.465 04:21:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.465 04:21:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 ************************************ 00:04:50.725 START TEST env_vtophys 00:04:50.725 ************************************ 00:04:50.725 04:21:29 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:50.725 EAL: lib.eal log level changed from notice to debug 00:04:50.725 EAL: Detected lcore 0 as core 0 on socket 0 00:04:50.725 EAL: Detected lcore 1 as core 1 on socket 0 00:04:50.725 EAL: Detected lcore 2 as core 2 on socket 0 00:04:50.725 EAL: Detected lcore 3 as core 3 on socket 0 00:04:50.725 EAL: Detected lcore 4 as core 4 on socket 0 00:04:50.725 EAL: Detected lcore 5 as core 5 on socket 0 00:04:50.725 EAL: Detected lcore 6 as core 6 on socket 0 00:04:50.725 EAL: Detected lcore 7 as core 8 on socket 0 00:04:50.725 EAL: Detected lcore 8 as core 9 on socket 0 00:04:50.725 EAL: Detected lcore 9 as core 10 on socket 0 00:04:50.725 EAL: Detected lcore 10 as core 11 on socket 0 00:04:50.725 
EAL: Detected lcore 11 as core 12 on socket 0 00:04:50.725 EAL: Detected lcore 12 as core 13 on socket 0 00:04:50.725 EAL: Detected lcore 13 as core 14 on socket 0 00:04:50.725 EAL: Detected lcore 14 as core 16 on socket 0 00:04:50.725 EAL: Detected lcore 15 as core 17 on socket 0 00:04:50.725 EAL: Detected lcore 16 as core 18 on socket 0 00:04:50.725 EAL: Detected lcore 17 as core 19 on socket 0 00:04:50.725 EAL: Detected lcore 18 as core 20 on socket 0 00:04:50.725 EAL: Detected lcore 19 as core 21 on socket 0 00:04:50.725 EAL: Detected lcore 20 as core 22 on socket 0 00:04:50.725 EAL: Detected lcore 21 as core 24 on socket 0 00:04:50.725 EAL: Detected lcore 22 as core 25 on socket 0 00:04:50.725 EAL: Detected lcore 23 as core 26 on socket 0 00:04:50.725 EAL: Detected lcore 24 as core 27 on socket 0 00:04:50.725 EAL: Detected lcore 25 as core 28 on socket 0 00:04:50.725 EAL: Detected lcore 26 as core 29 on socket 0 00:04:50.725 EAL: Detected lcore 27 as core 30 on socket 0 00:04:50.725 EAL: Detected lcore 28 as core 0 on socket 1 00:04:50.725 EAL: Detected lcore 29 as core 1 on socket 1 00:04:50.725 EAL: Detected lcore 30 as core 2 on socket 1 00:04:50.725 EAL: Detected lcore 31 as core 3 on socket 1 00:04:50.725 EAL: Detected lcore 32 as core 4 on socket 1 00:04:50.725 EAL: Detected lcore 33 as core 5 on socket 1 00:04:50.725 EAL: Detected lcore 34 as core 6 on socket 1 00:04:50.726 EAL: Detected lcore 35 as core 8 on socket 1 00:04:50.726 EAL: Detected lcore 36 as core 9 on socket 1 00:04:50.726 EAL: Detected lcore 37 as core 10 on socket 1 00:04:50.726 EAL: Detected lcore 38 as core 11 on socket 1 00:04:50.726 EAL: Detected lcore 39 as core 12 on socket 1 00:04:50.726 EAL: Detected lcore 40 as core 13 on socket 1 00:04:50.726 EAL: Detected lcore 41 as core 14 on socket 1 00:04:50.726 EAL: Detected lcore 42 as core 16 on socket 1 00:04:50.726 EAL: Detected lcore 43 as core 17 on socket 1 00:04:50.726 EAL: Detected lcore 44 as core 18 on socket 1 00:04:50.726 EAL: Detected lcore 45 as core 19 on socket 1 00:04:50.726 EAL: Detected lcore 46 as core 20 on socket 1 00:04:50.726 EAL: Detected lcore 47 as core 21 on socket 1 00:04:50.726 EAL: Detected lcore 48 as core 22 on socket 1 00:04:50.726 EAL: Detected lcore 49 as core 24 on socket 1 00:04:50.726 EAL: Detected lcore 50 as core 25 on socket 1 00:04:50.726 EAL: Detected lcore 51 as core 26 on socket 1 00:04:50.726 EAL: Detected lcore 52 as core 27 on socket 1 00:04:50.726 EAL: Detected lcore 53 as core 28 on socket 1 00:04:50.726 EAL: Detected lcore 54 as core 29 on socket 1 00:04:50.726 EAL: Detected lcore 55 as core 30 on socket 1 00:04:50.726 EAL: Detected lcore 56 as core 0 on socket 0 00:04:50.726 EAL: Detected lcore 57 as core 1 on socket 0 00:04:50.726 EAL: Detected lcore 58 as core 2 on socket 0 00:04:50.726 EAL: Detected lcore 59 as core 3 on socket 0 00:04:50.726 EAL: Detected lcore 60 as core 4 on socket 0 00:04:50.726 EAL: Detected lcore 61 as core 5 on socket 0 00:04:50.726 EAL: Detected lcore 62 as core 6 on socket 0 00:04:50.726 EAL: Detected lcore 63 as core 8 on socket 0 00:04:50.726 EAL: Detected lcore 64 as core 9 on socket 0 00:04:50.726 EAL: Detected lcore 65 as core 10 on socket 0 00:04:50.726 EAL: Detected lcore 66 as core 11 on socket 0 00:04:50.726 EAL: Detected lcore 67 as core 12 on socket 0 00:04:50.726 EAL: Detected lcore 68 as core 13 on socket 0 00:04:50.726 EAL: Detected lcore 69 as core 14 on socket 0 00:04:50.726 EAL: Detected lcore 70 as core 16 on socket 0 00:04:50.726 EAL: Detected lcore 71 as core 
17 on socket 0 00:04:50.726 EAL: Detected lcore 72 as core 18 on socket 0 00:04:50.726 EAL: Detected lcore 73 as core 19 on socket 0 00:04:50.726 EAL: Detected lcore 74 as core 20 on socket 0 00:04:50.726 EAL: Detected lcore 75 as core 21 on socket 0 00:04:50.726 EAL: Detected lcore 76 as core 22 on socket 0 00:04:50.726 EAL: Detected lcore 77 as core 24 on socket 0 00:04:50.726 EAL: Detected lcore 78 as core 25 on socket 0 00:04:50.726 EAL: Detected lcore 79 as core 26 on socket 0 00:04:50.726 EAL: Detected lcore 80 as core 27 on socket 0 00:04:50.726 EAL: Detected lcore 81 as core 28 on socket 0 00:04:50.726 EAL: Detected lcore 82 as core 29 on socket 0 00:04:50.726 EAL: Detected lcore 83 as core 30 on socket 0 00:04:50.726 EAL: Detected lcore 84 as core 0 on socket 1 00:04:50.726 EAL: Detected lcore 85 as core 1 on socket 1 00:04:50.726 EAL: Detected lcore 86 as core 2 on socket 1 00:04:50.726 EAL: Detected lcore 87 as core 3 on socket 1 00:04:50.726 EAL: Detected lcore 88 as core 4 on socket 1 00:04:50.726 EAL: Detected lcore 89 as core 5 on socket 1 00:04:50.726 EAL: Detected lcore 90 as core 6 on socket 1 00:04:50.726 EAL: Detected lcore 91 as core 8 on socket 1 00:04:50.726 EAL: Detected lcore 92 as core 9 on socket 1 00:04:50.726 EAL: Detected lcore 93 as core 10 on socket 1 00:04:50.726 EAL: Detected lcore 94 as core 11 on socket 1 00:04:50.726 EAL: Detected lcore 95 as core 12 on socket 1 00:04:50.726 EAL: Detected lcore 96 as core 13 on socket 1 00:04:50.726 EAL: Detected lcore 97 as core 14 on socket 1 00:04:50.726 EAL: Detected lcore 98 as core 16 on socket 1 00:04:50.726 EAL: Detected lcore 99 as core 17 on socket 1 00:04:50.726 EAL: Detected lcore 100 as core 18 on socket 1 00:04:50.726 EAL: Detected lcore 101 as core 19 on socket 1 00:04:50.726 EAL: Detected lcore 102 as core 20 on socket 1 00:04:50.726 EAL: Detected lcore 103 as core 21 on socket 1 00:04:50.726 EAL: Detected lcore 104 as core 22 on socket 1 00:04:50.726 EAL: Detected lcore 105 as core 24 on socket 1 00:04:50.726 EAL: Detected lcore 106 as core 25 on socket 1 00:04:50.726 EAL: Detected lcore 107 as core 26 on socket 1 00:04:50.726 EAL: Detected lcore 108 as core 27 on socket 1 00:04:50.726 EAL: Detected lcore 109 as core 28 on socket 1 00:04:50.726 EAL: Detected lcore 110 as core 29 on socket 1 00:04:50.726 EAL: Detected lcore 111 as core 30 on socket 1 00:04:50.726 EAL: Maximum logical cores by configuration: 128 00:04:50.726 EAL: Detected CPU lcores: 112 00:04:50.726 EAL: Detected NUMA nodes: 2 00:04:50.726 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:50.726 EAL: Detected shared linkage of DPDK 00:04:50.726 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:50.726 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:50.726 EAL: Registered [vdev] bus. 
00:04:50.726 EAL: bus.vdev log level changed from disabled to notice 00:04:50.726 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:50.726 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:50.726 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:50.726 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:50.726 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:50.726 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:50.726 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:50.726 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:50.726 EAL: No shared files mode enabled, IPC will be disabled 00:04:50.726 EAL: No shared files mode enabled, IPC is disabled 00:04:50.726 EAL: Bus pci wants IOVA as 'DC' 00:04:50.726 EAL: Bus vdev wants IOVA as 'DC' 00:04:50.726 EAL: Buses did not request a specific IOVA mode. 00:04:50.726 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:50.726 EAL: Selected IOVA mode 'VA' 00:04:50.726 EAL: Probing VFIO support... 00:04:50.726 EAL: IOMMU type 1 (Type 1) is supported 00:04:50.726 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:50.726 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:50.726 EAL: VFIO support initialized 00:04:50.726 EAL: Ask a virtual area of 0x2e000 bytes 00:04:50.726 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:50.726 EAL: Setting up physically contiguous memory... 
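Each memseg list reserved in the lines that follow holds n_segs:8192 segments of the 2097152-byte hugepage size, so every VA reservation comes to 8192 x 2 MiB = 16 GiB, the 0x400000000 figure that repeats below. A minimal standalone check of that arithmetic (illustrative only, not part of the test suite):

```c
#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* EAL creates memseg lists with n_segs:8192 and hugepage_sz:2097152;
	 * the reserved virtual area is their product. */
	uint64_t n_segs = 8192;
	uint64_t hugepage_sz = 2097152; /* 2 MiB */

	assert(n_segs * hugepage_sz == 0x400000000ULL); /* 16 GiB */
	return 0;
}
```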
00:04:50.726 EAL: Setting maximum number of open files to 524288 00:04:50.726 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:50.726 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:50.726 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:50.726 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.726 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:50.726 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.726 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.726 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:50.726 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:50.726 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.726 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:50.726 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.726 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.726 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:50.726 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:50.726 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.726 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:50.726 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.726 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.726 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:50.726 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:50.726 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.726 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:50.726 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.726 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.726 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:50.726 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:50.726 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:50.726 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.726 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:50.726 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:50.726 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.726 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:50.726 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:50.726 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.726 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:50.726 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:50.726 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.726 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:50.726 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:50.726 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.726 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:50.726 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:50.726 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.726 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:50.726 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:50.726 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.726 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:50.726 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:50.726 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.726 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:50.726 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:50.726 EAL: Hugepages will be freed exactly as allocated. 00:04:50.726 EAL: No shared files mode enabled, IPC is disabled 00:04:50.726 EAL: No shared files mode enabled, IPC is disabled 00:04:50.726 EAL: TSC frequency is ~2500000 KHz 00:04:50.726 EAL: Main lcore 0 is ready (tid=7fe8be4a3a00;cpuset=[0]) 00:04:50.726 EAL: Trying to obtain current memory policy. 00:04:50.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.726 EAL: Restoring previous memory policy: 0 00:04:50.726 EAL: request: mp_malloc_sync 00:04:50.726 EAL: No shared files mode enabled, IPC is disabled 00:04:50.726 EAL: Heap on socket 0 was expanded by 2MB 00:04:50.726 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:04:50.726 EAL: probe driver: 8086:37d2 net_i40e 00:04:50.726 EAL: Not managed by a supported kernel driver, skipped 00:04:50.727 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:04:50.727 EAL: probe driver: 8086:37d2 net_i40e 00:04:50.727 EAL: Not managed by a supported kernel driver, skipped 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:50.727 EAL: Mem event callback 'spdk:(nil)' registered 00:04:50.727 00:04:50.727 00:04:50.727 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.727 http://cunit.sourceforge.net/ 00:04:50.727 00:04:50.727 00:04:50.727 Suite: components_suite 00:04:50.727 Test: vtophys_malloc_test ...passed 00:04:50.727 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:50.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.727 EAL: Restoring previous memory policy: 4 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was expanded by 4MB 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was shrunk by 4MB 00:04:50.727 EAL: Trying to obtain current memory policy. 00:04:50.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.727 EAL: Restoring previous memory policy: 4 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was expanded by 6MB 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was shrunk by 6MB 00:04:50.727 EAL: Trying to obtain current memory policy. 00:04:50.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.727 EAL: Restoring previous memory policy: 4 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was expanded by 10MB 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was shrunk by 10MB 00:04:50.727 EAL: Trying to obtain current memory policy. 
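The expand/shrink pairs in this suite come from DPDK's dynamic heap: each allocation that outgrows the current heap makes EAL map more hugepages and invoke the registered mem event callback (the 'spdk:(nil)' callback above), which is how SPDK keeps its vtophys translations in sync. A minimal sketch of that mechanism against the DPDK 22.11 API used in this run; the "demo" callback name and sizes are illustrative, not taken from the suite:

```c
#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

static void
mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
{
	/* SPDK registers a callback like this and builds/tears down
	 * vtophys translations for the affected pages. */
	printf("mem event %s: addr=%p len=%zu\n",
	       event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
	(void)arg;
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;
	rte_mem_event_callback_register("demo", mem_event_cb, NULL);

	/* An allocation the heap cannot serve triggers RTE_MEM_EVENT_ALLOC
	 * before rte_malloc() returns ("Heap on socket 0 was expanded"). */
	void *buf = rte_malloc(NULL, 64 << 20, 0); /* 64 MiB */
	rte_free(buf); /* may fire RTE_MEM_EVENT_FREE as pages are unmapped */
	return 0;
}
```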
00:04:50.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.727 EAL: Restoring previous memory policy: 4 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was expanded by 18MB 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was shrunk by 18MB 00:04:50.727 EAL: Trying to obtain current memory policy. 00:04:50.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.727 EAL: Restoring previous memory policy: 4 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was expanded by 34MB 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was shrunk by 34MB 00:04:50.727 EAL: Trying to obtain current memory policy. 00:04:50.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.727 EAL: Restoring previous memory policy: 4 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was expanded by 66MB 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was shrunk by 66MB 00:04:50.727 EAL: Trying to obtain current memory policy. 00:04:50.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.727 EAL: Restoring previous memory policy: 4 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was expanded by 130MB 00:04:50.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.727 EAL: request: mp_malloc_sync 00:04:50.727 EAL: No shared files mode enabled, IPC is disabled 00:04:50.727 EAL: Heap on socket 0 was shrunk by 130MB 00:04:50.727 EAL: Trying to obtain current memory policy. 00:04:50.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.986 EAL: Restoring previous memory policy: 4 00:04:50.986 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.986 EAL: request: mp_malloc_sync 00:04:50.986 EAL: No shared files mode enabled, IPC is disabled 00:04:50.986 EAL: Heap on socket 0 was expanded by 258MB 00:04:50.986 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.986 EAL: request: mp_malloc_sync 00:04:50.986 EAL: No shared files mode enabled, IPC is disabled 00:04:50.986 EAL: Heap on socket 0 was shrunk by 258MB 00:04:50.986 EAL: Trying to obtain current memory policy. 
00:04:50.986 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.986 EAL: Restoring previous memory policy: 4 00:04:50.986 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.986 EAL: request: mp_malloc_sync 00:04:50.986 EAL: No shared files mode enabled, IPC is disabled 00:04:50.986 EAL: Heap on socket 0 was expanded by 514MB 00:04:51.246 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.246 EAL: request: mp_malloc_sync 00:04:51.246 EAL: No shared files mode enabled, IPC is disabled 00:04:51.246 EAL: Heap on socket 0 was shrunk by 514MB 00:04:51.246 EAL: Trying to obtain current memory policy. 00:04:51.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.506 EAL: Restoring previous memory policy: 4 00:04:51.506 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.506 EAL: request: mp_malloc_sync 00:04:51.506 EAL: No shared files mode enabled, IPC is disabled 00:04:51.506 EAL: Heap on socket 0 was expanded by 1026MB 00:04:51.506 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.766 EAL: request: mp_malloc_sync 00:04:51.766 EAL: No shared files mode enabled, IPC is disabled 00:04:51.766 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:51.766 passed 00:04:51.766 00:04:51.766 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.766 suites 1 1 n/a 0 0 00:04:51.766 tests 2 2 2 0 0 00:04:51.766 asserts 497 497 497 0 n/a 00:04:51.766 00:04:51.766 Elapsed time = 0.978 seconds 00:04:51.766 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.766 EAL: request: mp_malloc_sync 00:04:51.766 EAL: No shared files mode enabled, IPC is disabled 00:04:51.766 EAL: Heap on socket 0 was shrunk by 2MB 00:04:51.766 EAL: No shared files mode enabled, IPC is disabled 00:04:51.766 EAL: No shared files mode enabled, IPC is disabled 00:04:51.766 EAL: No shared files mode enabled, IPC is disabled 00:04:51.766 00:04:51.766 real 0m1.134s 00:04:51.766 user 0m0.652s 00:04:51.766 sys 0m0.448s 00:04:51.766 04:21:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.766 04:21:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:51.766 ************************************ 00:04:51.766 END TEST env_vtophys 00:04:51.766 ************************************ 00:04:51.766 04:21:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:51.766 04:21:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.766 04:21:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.766 04:21:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.766 ************************************ 00:04:51.766 START TEST env_pci 00:04:51.766 ************************************ 00:04:51.766 04:21:30 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:51.766 00:04:51.766 00:04:51.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.766 http://cunit.sourceforge.net/ 00:04:51.766 00:04:51.766 00:04:51.766 Suite: pci 00:04:51.766 Test: pci_hook ...[2024-11-17 04:21:30.543183] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103476 has claimed it 00:04:51.766 EAL: Cannot find device (10000:00:01.0) 00:04:51.766 EAL: Failed to attach device on primary process 00:04:51.766 passed 00:04:51.766 00:04:51.766 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.766 suites 1 1 
n/a 0 0 00:04:51.766 tests 1 1 1 0 0 00:04:51.766 asserts 25 25 25 0 n/a 00:04:51.766 00:04:51.766 Elapsed time = 0.036 seconds 00:04:51.766 00:04:51.766 real 0m0.057s 00:04:51.766 user 0m0.018s 00:04:51.766 sys 0m0.039s 00:04:51.766 04:21:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.766 04:21:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:51.766 ************************************ 00:04:51.766 END TEST env_pci 00:04:51.766 ************************************ 00:04:52.025 04:21:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:52.025 04:21:30 env -- env/env.sh@15 -- # uname 00:04:52.025 04:21:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:52.025 04:21:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:52.025 04:21:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.025 04:21:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:52.025 04:21:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.025 04:21:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.025 ************************************ 00:04:52.025 START TEST env_dpdk_post_init 00:04:52.025 ************************************ 00:04:52.025 04:21:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.025 EAL: Detected CPU lcores: 112 00:04:52.025 EAL: Detected NUMA nodes: 2 00:04:52.025 EAL: Detected shared linkage of DPDK 00:04:52.025 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.025 EAL: Selected IOVA mode 'VA' 00:04:52.025 EAL: VFIO support initialized 00:04:52.025 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:52.025 EAL: Using IOMMU type 1 (Type 1) 00:04:52.025 EAL: Ignore mapping IO port bar(1) 00:04:52.025 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:52.025 EAL: Ignore mapping IO port bar(1) 00:04:52.025 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:52.285 EAL: Ignore mapping IO port bar(1) 00:04:52.285 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:52.285 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 
00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:52.286 EAL: Ignore mapping IO port bar(1) 00:04:52.286 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:53.225 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:57.427 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:57.427 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:57.427 Starting DPDK initialization... 00:04:57.427 Starting SPDK post initialization... 00:04:57.427 SPDK NVMe probe 00:04:57.427 Attaching to 0000:d8:00.0 00:04:57.427 Attached to 0000:d8:00.0 00:04:57.427 Cleaning up... 00:04:57.427 00:04:57.427 real 0m5.372s 00:04:57.427 user 0m4.000s 00:04:57.427 sys 0m0.426s 00:04:57.427 04:21:36 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.427 04:21:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.427 ************************************ 00:04:57.427 END TEST env_dpdk_post_init 00:04:57.427 ************************************ 00:04:57.427 04:21:36 env -- env/env.sh@26 -- # uname 00:04:57.427 04:21:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:57.427 04:21:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:57.427 04:21:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.427 04:21:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.427 04:21:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.427 ************************************ 00:04:57.427 START TEST env_mem_callbacks 00:04:57.427 ************************************ 00:04:57.427 04:21:36 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:57.427 EAL: Detected CPU lcores: 112 00:04:57.427 EAL: Detected NUMA nodes: 2 00:04:57.427 EAL: Detected shared linkage of DPDK 00:04:57.427 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:57.427 EAL: Selected IOVA mode 'VA' 00:04:57.427 EAL: VFIO support initialized 00:04:57.427 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:57.427 00:04:57.427 00:04:57.427 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.427 http://cunit.sourceforge.net/ 00:04:57.427 00:04:57.427 00:04:57.427 Suite: memory 00:04:57.427 Test: test ... 
00:04:57.427 register 0x200000200000 2097152 00:04:57.427 malloc 3145728 00:04:57.427 register 0x200000400000 4194304 00:04:57.427 buf 0x200000500000 len 3145728 PASSED 00:04:57.427 malloc 64 00:04:57.427 buf 0x2000004fff40 len 64 PASSED 00:04:57.427 malloc 4194304 00:04:57.427 register 0x200000800000 6291456 00:04:57.427 buf 0x200000a00000 len 4194304 PASSED 00:04:57.427 free 0x200000500000 3145728 00:04:57.427 free 0x2000004fff40 64 00:04:57.427 unregister 0x200000400000 4194304 PASSED 00:04:57.427 free 0x200000a00000 4194304 00:04:57.427 unregister 0x200000800000 6291456 PASSED 00:04:57.427 malloc 8388608 00:04:57.427 register 0x200000400000 10485760 00:04:57.427 buf 0x200000600000 len 8388608 PASSED 00:04:57.427 free 0x200000600000 8388608 00:04:57.427 unregister 0x200000400000 10485760 PASSED 00:04:57.427 passed 00:04:57.427 00:04:57.427 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.427 suites 1 1 n/a 0 0 00:04:57.427 tests 1 1 1 0 0 00:04:57.427 asserts 15 15 15 0 n/a 00:04:57.427 00:04:57.427 Elapsed time = 0.009 seconds 00:04:57.427 00:04:57.427 real 0m0.073s 00:04:57.427 user 0m0.020s 00:04:57.427 sys 0m0.052s 00:04:57.427 04:21:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.427 04:21:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:57.427 ************************************ 00:04:57.427 END TEST env_mem_callbacks 00:04:57.427 ************************************ 00:04:57.688 00:04:57.688 real 0m7.403s 00:04:57.688 user 0m5.071s 00:04:57.688 sys 0m1.400s 00:04:57.688 04:21:36 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.688 04:21:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.688 ************************************ 00:04:57.688 END TEST env 00:04:57.688 ************************************ 00:04:57.688 04:21:36 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:57.688 04:21:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.688 04:21:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.688 04:21:36 -- common/autotest_common.sh@10 -- # set +x 00:04:57.688 ************************************ 00:04:57.688 START TEST rpc 00:04:57.688 ************************************ 00:04:57.688 04:21:36 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:57.688 * Looking for test storage... 
00:04:57.688 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:57.688 04:21:36 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.688 04:21:36 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.688 04:21:36 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.948 04:21:36 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.948 04:21:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.948 04:21:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.948 04:21:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.948 04:21:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.948 04:21:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.948 04:21:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.948 04:21:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.948 04:21:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.948 04:21:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.948 04:21:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.948 04:21:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.948 04:21:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.949 04:21:36 rpc -- scripts/common.sh@345 -- # : 1 00:04:57.949 04:21:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.949 04:21:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.949 04:21:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.949 04:21:36 rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.949 04:21:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.949 04:21:36 rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.949 04:21:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.949 04:21:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.949 04:21:36 rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.949 04:21:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.949 04:21:36 rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.949 04:21:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.949 04:21:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.949 04:21:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.949 04:21:36 rpc -- scripts/common.sh@368 -- # return 0 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.949 --rc genhtml_branch_coverage=1 00:04:57.949 --rc genhtml_function_coverage=1 00:04:57.949 --rc genhtml_legend=1 00:04:57.949 --rc geninfo_all_blocks=1 00:04:57.949 --rc geninfo_unexecuted_blocks=1 00:04:57.949 00:04:57.949 ' 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.949 --rc genhtml_branch_coverage=1 00:04:57.949 --rc genhtml_function_coverage=1 00:04:57.949 --rc genhtml_legend=1 00:04:57.949 --rc geninfo_all_blocks=1 00:04:57.949 --rc geninfo_unexecuted_blocks=1 00:04:57.949 00:04:57.949 ' 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.949 --rc genhtml_branch_coverage=1 00:04:57.949 --rc genhtml_function_coverage=1 00:04:57.949 
--rc genhtml_legend=1 00:04:57.949 --rc geninfo_all_blocks=1 00:04:57.949 --rc geninfo_unexecuted_blocks=1 00:04:57.949 00:04:57.949 ' 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.949 --rc genhtml_branch_coverage=1 00:04:57.949 --rc genhtml_function_coverage=1 00:04:57.949 --rc genhtml_legend=1 00:04:57.949 --rc geninfo_all_blocks=1 00:04:57.949 --rc geninfo_unexecuted_blocks=1 00:04:57.949 00:04:57.949 ' 00:04:57.949 04:21:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=104677 00:04:57.949 04:21:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.949 04:21:36 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:57.949 04:21:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 104677 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 104677 ']' 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.949 04:21:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.949 [2024-11-17 04:21:36.601922] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:04:57.949 [2024-11-17 04:21:36.601991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104677 ] 00:04:57.949 [2024-11-17 04:21:36.693531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.949 [2024-11-17 04:21:36.715404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.949 [2024-11-17 04:21:36.715441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104677' to capture a snapshot of events at runtime. 00:04:57.949 [2024-11-17 04:21:36.715450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.949 [2024-11-17 04:21:36.715458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.949 [2024-11-17 04:21:36.715482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104677 for offline analysis/debug. 
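waitforlisten above polls /var/tmp/spdk.sock until the freshly started spdk_tgt answers RPCs; the NOTICE lines are emitted by SPDK's app framework inside spdk_app_start(). A minimal sketch of a target that goes through the same startup path (the name demo_tgt is illustrative; the two-argument spdk_app_opts_init() of recent SPDK releases is assumed):

```c
#include "spdk/stdinc.h"
#include "spdk/event.h"

static void
start_fn(void *ctx)
{
	/* Runs on the main reactor after init, i.e. once the
	 * "Reactor started on core 0" notice has been printed. */
	(void)ctx;
	spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "demo_tgt";
	opts.rpc_addr = "/var/tmp/spdk.sock"; /* socket rpc.sh waits on */

	rc = spdk_app_start(&opts, start_fn, NULL);
	spdk_app_fini();
	return rc;
}
```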
00:04:57.949 [2024-11-17 04:21:36.716082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.209 04:21:36 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.209 04:21:36 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.209 04:21:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:58.209 04:21:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:58.209 04:21:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.209 04:21:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.209 04:21:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.210 04:21:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.210 04:21:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.210 ************************************ 00:04:58.210 START TEST rpc_integrity 00:04:58.210 ************************************ 00:04:58.210 04:21:36 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:58.210 04:21:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.210 04:21:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.210 04:21:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.210 04:21:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.210 04:21:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.210 04:21:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.210 04:21:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.210 04:21:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.210 04:21:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.210 04:21:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.210 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.210 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.210 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.210 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.210 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.210 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.210 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.210 { 00:04:58.210 "name": "Malloc0", 00:04:58.210 "aliases": [ 00:04:58.210 "3adab597-b5fa-44c7-9cfb-b633d67699ad" 00:04:58.210 ], 00:04:58.210 "product_name": "Malloc disk", 00:04:58.210 "block_size": 512, 00:04:58.210 "num_blocks": 16384, 00:04:58.210 "uuid": "3adab597-b5fa-44c7-9cfb-b633d67699ad", 00:04:58.210 "assigned_rate_limits": { 00:04:58.210 "rw_ios_per_sec": 0, 00:04:58.210 "rw_mbytes_per_sec": 0, 00:04:58.210 "r_mbytes_per_sec": 0, 00:04:58.210 "w_mbytes_per_sec": 0 00:04:58.210 }, 00:04:58.210 "claimed": false, 
00:04:58.210 "zoned": false, 00:04:58.210 "supported_io_types": { 00:04:58.210 "read": true, 00:04:58.210 "write": true, 00:04:58.210 "unmap": true, 00:04:58.210 "flush": true, 00:04:58.210 "reset": true, 00:04:58.210 "nvme_admin": false, 00:04:58.210 "nvme_io": false, 00:04:58.210 "nvme_io_md": false, 00:04:58.210 "write_zeroes": true, 00:04:58.210 "zcopy": true, 00:04:58.210 "get_zone_info": false, 00:04:58.210 "zone_management": false, 00:04:58.210 "zone_append": false, 00:04:58.210 "compare": false, 00:04:58.210 "compare_and_write": false, 00:04:58.210 "abort": true, 00:04:58.210 "seek_hole": false, 00:04:58.210 "seek_data": false, 00:04:58.210 "copy": true, 00:04:58.210 "nvme_iov_md": false 00:04:58.210 }, 00:04:58.210 "memory_domains": [ 00:04:58.210 { 00:04:58.210 "dma_device_id": "system", 00:04:58.210 "dma_device_type": 1 00:04:58.210 }, 00:04:58.210 { 00:04:58.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.210 "dma_device_type": 2 00:04:58.210 } 00:04:58.210 ], 00:04:58.210 "driver_specific": {} 00:04:58.210 } 00:04:58.210 ]' 00:04:58.210 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.471 [2024-11-17 04:21:37.078050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.471 [2024-11-17 04:21:37.078079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.471 [2024-11-17 04:21:37.078094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xed8300 00:04:58.471 [2024-11-17 04:21:37.078103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.471 [2024-11-17 04:21:37.079182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.471 [2024-11-17 04:21:37.079204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.471 Passthru0 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.471 { 00:04:58.471 "name": "Malloc0", 00:04:58.471 "aliases": [ 00:04:58.471 "3adab597-b5fa-44c7-9cfb-b633d67699ad" 00:04:58.471 ], 00:04:58.471 "product_name": "Malloc disk", 00:04:58.471 "block_size": 512, 00:04:58.471 "num_blocks": 16384, 00:04:58.471 "uuid": "3adab597-b5fa-44c7-9cfb-b633d67699ad", 00:04:58.471 "assigned_rate_limits": { 00:04:58.471 "rw_ios_per_sec": 0, 00:04:58.471 "rw_mbytes_per_sec": 0, 00:04:58.471 "r_mbytes_per_sec": 0, 00:04:58.471 "w_mbytes_per_sec": 0 00:04:58.471 }, 00:04:58.471 "claimed": true, 00:04:58.471 "claim_type": "exclusive_write", 00:04:58.471 "zoned": false, 00:04:58.471 "supported_io_types": { 00:04:58.471 "read": true, 00:04:58.471 "write": true, 00:04:58.471 "unmap": true, 00:04:58.471 "flush": true, 00:04:58.471 "reset": true, 
00:04:58.471 "nvme_admin": false, 00:04:58.471 "nvme_io": false, 00:04:58.471 "nvme_io_md": false, 00:04:58.471 "write_zeroes": true, 00:04:58.471 "zcopy": true, 00:04:58.471 "get_zone_info": false, 00:04:58.471 "zone_management": false, 00:04:58.471 "zone_append": false, 00:04:58.471 "compare": false, 00:04:58.471 "compare_and_write": false, 00:04:58.471 "abort": true, 00:04:58.471 "seek_hole": false, 00:04:58.471 "seek_data": false, 00:04:58.471 "copy": true, 00:04:58.471 "nvme_iov_md": false 00:04:58.471 }, 00:04:58.471 "memory_domains": [ 00:04:58.471 { 00:04:58.471 "dma_device_id": "system", 00:04:58.471 "dma_device_type": 1 00:04:58.471 }, 00:04:58.471 { 00:04:58.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.471 "dma_device_type": 2 00:04:58.471 } 00:04:58.471 ], 00:04:58.471 "driver_specific": {} 00:04:58.471 }, 00:04:58.471 { 00:04:58.471 "name": "Passthru0", 00:04:58.471 "aliases": [ 00:04:58.471 "9333f701-c6f0-545b-a76c-e19adcd98ac9" 00:04:58.471 ], 00:04:58.471 "product_name": "passthru", 00:04:58.471 "block_size": 512, 00:04:58.471 "num_blocks": 16384, 00:04:58.471 "uuid": "9333f701-c6f0-545b-a76c-e19adcd98ac9", 00:04:58.471 "assigned_rate_limits": { 00:04:58.471 "rw_ios_per_sec": 0, 00:04:58.471 "rw_mbytes_per_sec": 0, 00:04:58.471 "r_mbytes_per_sec": 0, 00:04:58.471 "w_mbytes_per_sec": 0 00:04:58.471 }, 00:04:58.471 "claimed": false, 00:04:58.471 "zoned": false, 00:04:58.471 "supported_io_types": { 00:04:58.471 "read": true, 00:04:58.471 "write": true, 00:04:58.471 "unmap": true, 00:04:58.471 "flush": true, 00:04:58.471 "reset": true, 00:04:58.471 "nvme_admin": false, 00:04:58.471 "nvme_io": false, 00:04:58.471 "nvme_io_md": false, 00:04:58.471 "write_zeroes": true, 00:04:58.471 "zcopy": true, 00:04:58.471 "get_zone_info": false, 00:04:58.471 "zone_management": false, 00:04:58.471 "zone_append": false, 00:04:58.471 "compare": false, 00:04:58.471 "compare_and_write": false, 00:04:58.471 "abort": true, 00:04:58.471 "seek_hole": false, 00:04:58.471 "seek_data": false, 00:04:58.471 "copy": true, 00:04:58.471 "nvme_iov_md": false 00:04:58.471 }, 00:04:58.471 "memory_domains": [ 00:04:58.471 { 00:04:58.471 "dma_device_id": "system", 00:04:58.471 "dma_device_type": 1 00:04:58.471 }, 00:04:58.471 { 00:04:58.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.471 "dma_device_type": 2 00:04:58.471 } 00:04:58.471 ], 00:04:58.471 "driver_specific": { 00:04:58.471 "passthru": { 00:04:58.471 "name": "Passthru0", 00:04:58.471 "base_bdev_name": "Malloc0" 00:04:58.471 } 00:04:58.471 } 00:04:58.471 } 00:04:58.471 ]' 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.471 
04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.471 04:21:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.471 00:04:58.471 real 0m0.288s 00:04:58.471 user 0m0.176s 00:04:58.471 sys 0m0.049s 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.471 04:21:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.471 ************************************ 00:04:58.471 END TEST rpc_integrity 00:04:58.471 ************************************ 00:04:58.471 04:21:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.471 04:21:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.471 04:21:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.471 04:21:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.732 ************************************ 00:04:58.732 START TEST rpc_plugins 00:04:58.732 ************************************ 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:58.732 { 00:04:58.732 "name": "Malloc1", 00:04:58.732 "aliases": [ 00:04:58.732 "33143d16-2ed7-4829-bbef-e8b7e37273ab" 00:04:58.732 ], 00:04:58.732 "product_name": "Malloc disk", 00:04:58.732 "block_size": 4096, 00:04:58.732 "num_blocks": 256, 00:04:58.732 "uuid": "33143d16-2ed7-4829-bbef-e8b7e37273ab", 00:04:58.732 "assigned_rate_limits": { 00:04:58.732 "rw_ios_per_sec": 0, 00:04:58.732 "rw_mbytes_per_sec": 0, 00:04:58.732 "r_mbytes_per_sec": 0, 00:04:58.732 "w_mbytes_per_sec": 0 00:04:58.732 }, 00:04:58.732 "claimed": false, 00:04:58.732 "zoned": false, 00:04:58.732 "supported_io_types": { 00:04:58.732 "read": true, 00:04:58.732 "write": true, 00:04:58.732 "unmap": true, 00:04:58.732 "flush": true, 00:04:58.732 "reset": true, 00:04:58.732 "nvme_admin": false, 00:04:58.732 "nvme_io": false, 00:04:58.732 "nvme_io_md": false, 00:04:58.732 "write_zeroes": true, 00:04:58.732 "zcopy": true, 00:04:58.732 "get_zone_info": false, 00:04:58.732 "zone_management": false, 00:04:58.732 "zone_append": false, 00:04:58.732 "compare": false, 00:04:58.732 "compare_and_write": false, 00:04:58.732 "abort": true, 00:04:58.732 "seek_hole": false, 00:04:58.732 "seek_data": false, 00:04:58.732 "copy": true, 00:04:58.732 "nvme_iov_md": false 00:04:58.732 }, 00:04:58.732 
"memory_domains": [ 00:04:58.732 { 00:04:58.732 "dma_device_id": "system", 00:04:58.732 "dma_device_type": 1 00:04:58.732 }, 00:04:58.732 { 00:04:58.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.732 "dma_device_type": 2 00:04:58.732 } 00:04:58.732 ], 00:04:58.732 "driver_specific": {} 00:04:58.732 } 00:04:58.732 ]' 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:58.732 04:21:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:58.732 00:04:58.732 real 0m0.148s 00:04:58.732 user 0m0.087s 00:04:58.732 sys 0m0.027s 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.732 04:21:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.732 ************************************ 00:04:58.732 END TEST rpc_plugins 00:04:58.732 ************************************ 00:04:58.732 04:21:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:58.732 04:21:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.732 04:21:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.732 04:21:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.732 ************************************ 00:04:58.732 START TEST rpc_trace_cmd_test 00:04:58.732 ************************************ 00:04:58.732 04:21:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:58.732 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:58.993 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104677", 00:04:58.993 "tpoint_group_mask": "0x8", 00:04:58.993 "iscsi_conn": { 00:04:58.993 "mask": "0x2", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "scsi": { 00:04:58.993 "mask": "0x4", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "bdev": { 00:04:58.993 "mask": "0x8", 00:04:58.993 "tpoint_mask": "0xffffffffffffffff" 00:04:58.993 }, 00:04:58.993 "nvmf_rdma": { 00:04:58.993 "mask": "0x10", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "nvmf_tcp": { 00:04:58.993 "mask": "0x20", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 
00:04:58.993 "ftl": { 00:04:58.993 "mask": "0x40", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "blobfs": { 00:04:58.993 "mask": "0x80", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "dsa": { 00:04:58.993 "mask": "0x200", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "thread": { 00:04:58.993 "mask": "0x400", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "nvme_pcie": { 00:04:58.993 "mask": "0x800", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "iaa": { 00:04:58.993 "mask": "0x1000", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "nvme_tcp": { 00:04:58.993 "mask": "0x2000", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "bdev_nvme": { 00:04:58.993 "mask": "0x4000", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "sock": { 00:04:58.993 "mask": "0x8000", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "blob": { 00:04:58.993 "mask": "0x10000", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "bdev_raid": { 00:04:58.993 "mask": "0x20000", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 }, 00:04:58.993 "scheduler": { 00:04:58.993 "mask": "0x40000", 00:04:58.993 "tpoint_mask": "0x0" 00:04:58.993 } 00:04:58.993 }' 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:58.993 00:04:58.993 real 0m0.215s 00:04:58.993 user 0m0.171s 00:04:58.993 sys 0m0.036s 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.993 04:21:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.993 ************************************ 00:04:58.993 END TEST rpc_trace_cmd_test 00:04:58.993 ************************************ 00:04:58.993 04:21:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:58.993 04:21:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:58.993 04:21:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:58.993 04:21:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.993 04:21:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.993 04:21:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.254 ************************************ 00:04:59.254 START TEST rpc_daemon_integrity 00:04:59.254 ************************************ 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.254 { 00:04:59.254 "name": "Malloc2", 00:04:59.254 "aliases": [ 00:04:59.254 "a3e1e632-1345-4443-b770-2e03ff833473" 00:04:59.254 ], 00:04:59.254 "product_name": "Malloc disk", 00:04:59.254 "block_size": 512, 00:04:59.254 "num_blocks": 16384, 00:04:59.254 "uuid": "a3e1e632-1345-4443-b770-2e03ff833473", 00:04:59.254 "assigned_rate_limits": { 00:04:59.254 "rw_ios_per_sec": 0, 00:04:59.254 "rw_mbytes_per_sec": 0, 00:04:59.254 "r_mbytes_per_sec": 0, 00:04:59.254 "w_mbytes_per_sec": 0 00:04:59.254 }, 00:04:59.254 "claimed": false, 00:04:59.254 "zoned": false, 00:04:59.254 "supported_io_types": { 00:04:59.254 "read": true, 00:04:59.254 "write": true, 00:04:59.254 "unmap": true, 00:04:59.254 "flush": true, 00:04:59.254 "reset": true, 00:04:59.254 "nvme_admin": false, 00:04:59.254 "nvme_io": false, 00:04:59.254 "nvme_io_md": false, 00:04:59.254 "write_zeroes": true, 00:04:59.254 "zcopy": true, 00:04:59.254 "get_zone_info": false, 00:04:59.254 "zone_management": false, 00:04:59.254 "zone_append": false, 00:04:59.254 "compare": false, 00:04:59.254 "compare_and_write": false, 00:04:59.254 "abort": true, 00:04:59.254 "seek_hole": false, 00:04:59.254 "seek_data": false, 00:04:59.254 "copy": true, 00:04:59.254 "nvme_iov_md": false 00:04:59.254 }, 00:04:59.254 "memory_domains": [ 00:04:59.254 { 00:04:59.254 "dma_device_id": "system", 00:04:59.254 "dma_device_type": 1 00:04:59.254 }, 00:04:59.254 { 00:04:59.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.254 "dma_device_type": 2 00:04:59.254 } 00:04:59.254 ], 00:04:59.254 "driver_specific": {} 00:04:59.254 } 00:04:59.254 ]' 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.254 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.255 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.255 [2024-11-17 04:21:37.996538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.255 [2024-11-17 04:21:37.996565] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.255 [2024-11-17 04:21:37.996580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd97c40 00:04:59.255 [2024-11-17 04:21:37.996588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.255 [2024-11-17 04:21:37.997515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.255 [2024-11-17 04:21:37.997536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.255 Passthru0 00:04:59.255 04:21:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.255 { 00:04:59.255 "name": "Malloc2", 00:04:59.255 "aliases": [ 00:04:59.255 "a3e1e632-1345-4443-b770-2e03ff833473" 00:04:59.255 ], 00:04:59.255 "product_name": "Malloc disk", 00:04:59.255 "block_size": 512, 00:04:59.255 "num_blocks": 16384, 00:04:59.255 "uuid": "a3e1e632-1345-4443-b770-2e03ff833473", 00:04:59.255 "assigned_rate_limits": { 00:04:59.255 "rw_ios_per_sec": 0, 00:04:59.255 "rw_mbytes_per_sec": 0, 00:04:59.255 "r_mbytes_per_sec": 0, 00:04:59.255 "w_mbytes_per_sec": 0 00:04:59.255 }, 00:04:59.255 "claimed": true, 00:04:59.255 "claim_type": "exclusive_write", 00:04:59.255 "zoned": false, 00:04:59.255 "supported_io_types": { 00:04:59.255 "read": true, 00:04:59.255 "write": true, 00:04:59.255 "unmap": true, 00:04:59.255 "flush": true, 00:04:59.255 "reset": true, 00:04:59.255 "nvme_admin": false, 00:04:59.255 "nvme_io": false, 00:04:59.255 "nvme_io_md": false, 00:04:59.255 "write_zeroes": true, 00:04:59.255 "zcopy": true, 00:04:59.255 "get_zone_info": false, 00:04:59.255 "zone_management": false, 00:04:59.255 "zone_append": false, 00:04:59.255 "compare": false, 00:04:59.255 "compare_and_write": false, 00:04:59.255 "abort": true, 00:04:59.255 "seek_hole": false, 00:04:59.255 "seek_data": false, 00:04:59.255 "copy": true, 00:04:59.255 "nvme_iov_md": false 00:04:59.255 }, 00:04:59.255 "memory_domains": [ 00:04:59.255 { 00:04:59.255 "dma_device_id": "system", 00:04:59.255 "dma_device_type": 1 00:04:59.255 }, 00:04:59.255 { 00:04:59.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.255 "dma_device_type": 2 00:04:59.255 } 00:04:59.255 ], 00:04:59.255 "driver_specific": {} 00:04:59.255 }, 00:04:59.255 { 00:04:59.255 "name": "Passthru0", 00:04:59.255 "aliases": [ 00:04:59.255 "8815d63b-8baa-5e13-ba4b-573c0ccd9982" 00:04:59.255 ], 00:04:59.255 "product_name": "passthru", 00:04:59.255 "block_size": 512, 00:04:59.255 "num_blocks": 16384, 00:04:59.255 "uuid": "8815d63b-8baa-5e13-ba4b-573c0ccd9982", 00:04:59.255 "assigned_rate_limits": { 00:04:59.255 "rw_ios_per_sec": 0, 00:04:59.255 "rw_mbytes_per_sec": 0, 00:04:59.255 "r_mbytes_per_sec": 0, 00:04:59.255 "w_mbytes_per_sec": 0 00:04:59.255 }, 00:04:59.255 "claimed": false, 00:04:59.255 "zoned": false, 00:04:59.255 "supported_io_types": { 00:04:59.255 "read": true, 00:04:59.255 "write": true, 00:04:59.255 "unmap": true, 00:04:59.255 "flush": true, 00:04:59.255 "reset": true, 00:04:59.255 "nvme_admin": false, 
00:04:59.255 "nvme_io": false, 00:04:59.255 "nvme_io_md": false, 00:04:59.255 "write_zeroes": true, 00:04:59.255 "zcopy": true, 00:04:59.255 "get_zone_info": false, 00:04:59.255 "zone_management": false, 00:04:59.255 "zone_append": false, 00:04:59.255 "compare": false, 00:04:59.255 "compare_and_write": false, 00:04:59.255 "abort": true, 00:04:59.255 "seek_hole": false, 00:04:59.255 "seek_data": false, 00:04:59.255 "copy": true, 00:04:59.255 "nvme_iov_md": false 00:04:59.255 }, 00:04:59.255 "memory_domains": [ 00:04:59.255 { 00:04:59.255 "dma_device_id": "system", 00:04:59.255 "dma_device_type": 1 00:04:59.255 }, 00:04:59.255 { 00:04:59.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.255 "dma_device_type": 2 00:04:59.255 } 00:04:59.255 ], 00:04:59.255 "driver_specific": { 00:04:59.255 "passthru": { 00:04:59.255 "name": "Passthru0", 00:04:59.255 "base_bdev_name": "Malloc2" 00:04:59.255 } 00:04:59.255 } 00:04:59.255 } 00:04:59.255 ]' 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.255 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.516 00:04:59.516 real 0m0.301s 00:04:59.516 user 0m0.192s 00:04:59.516 sys 0m0.049s 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.516 04:21:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.516 ************************************ 00:04:59.516 END TEST rpc_daemon_integrity 00:04:59.516 ************************************ 00:04:59.516 04:21:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.516 04:21:38 rpc -- rpc/rpc.sh@84 -- # killprocess 104677 00:04:59.516 04:21:38 rpc -- common/autotest_common.sh@954 -- # '[' -z 104677 ']' 00:04:59.516 04:21:38 rpc -- common/autotest_common.sh@958 -- # kill -0 104677 00:04:59.516 04:21:38 rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.516 04:21:38 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.516 04:21:38 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104677 00:04:59.516 04:21:38 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.516 04:21:38 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.516 04:21:38 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104677' 00:04:59.516 killing process with pid 104677 00:04:59.516 04:21:38 rpc -- common/autotest_common.sh@973 -- # kill 104677 00:04:59.516 04:21:38 rpc -- common/autotest_common.sh@978 -- # wait 104677 00:04:59.776 00:04:59.776 real 0m2.198s 00:04:59.776 user 0m2.761s 00:04:59.776 sys 0m0.836s 00:04:59.776 04:21:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.776 04:21:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.776 ************************************ 00:04:59.776 END TEST rpc 00:04:59.776 ************************************ 00:04:59.776 04:21:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:59.776 04:21:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.776 04:21:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.776 04:21:38 -- common/autotest_common.sh@10 -- # set +x 00:05:00.036 ************************************ 00:05:00.036 START TEST skip_rpc 00:05:00.036 ************************************ 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:00.036 * Looking for test storage... 00:05:00.036 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.036 04:21:38 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.036 --rc genhtml_branch_coverage=1 00:05:00.036 --rc genhtml_function_coverage=1 00:05:00.036 --rc genhtml_legend=1 00:05:00.036 --rc geninfo_all_blocks=1 00:05:00.036 --rc geninfo_unexecuted_blocks=1 00:05:00.036 00:05:00.036 ' 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.036 --rc genhtml_branch_coverage=1 00:05:00.036 --rc genhtml_function_coverage=1 00:05:00.036 --rc genhtml_legend=1 00:05:00.036 --rc geninfo_all_blocks=1 00:05:00.036 --rc geninfo_unexecuted_blocks=1 00:05:00.036 00:05:00.036 ' 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.036 --rc genhtml_branch_coverage=1 00:05:00.036 --rc genhtml_function_coverage=1 00:05:00.036 --rc genhtml_legend=1 00:05:00.036 --rc geninfo_all_blocks=1 00:05:00.036 --rc geninfo_unexecuted_blocks=1 00:05:00.036 00:05:00.036 ' 00:05:00.036 04:21:38 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.036 --rc genhtml_branch_coverage=1 00:05:00.036 --rc genhtml_function_coverage=1 00:05:00.036 --rc genhtml_legend=1 00:05:00.036 --rc geninfo_all_blocks=1 00:05:00.036 --rc geninfo_unexecuted_blocks=1 00:05:00.036 00:05:00.036 ' 00:05:00.036 04:21:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:00.037 04:21:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:00.037 04:21:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:00.037 04:21:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.037 04:21:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.037 04:21:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.297 ************************************ 00:05:00.297 START TEST skip_rpc 00:05:00.297 ************************************ 00:05:00.297 04:21:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:00.297 04:21:38 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=105135 00:05:00.297 04:21:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.297 04:21:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:00.297 04:21:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:00.297 [2024-11-17 04:21:38.921197] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:00.297 [2024-11-17 04:21:38.921241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105135 ] 00:05:00.297 [2024-11-17 04:21:39.010981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.297 [2024-11-17 04:21:39.033541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 105135 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 105135 ']' 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 105135 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105135 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105135' 00:05:05.580 killing process with pid 105135 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- 
common/autotest_common.sh@973 -- # kill 105135 00:05:05.580 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 105135 00:05:05.580 00:05:05.580 real 0m5.377s 00:05:05.580 user 0m5.103s 00:05:05.580 sys 0m0.326s 00:05:05.580 04:21:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.580 04:21:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.580 ************************************ 00:05:05.580 END TEST skip_rpc 00:05:05.580 ************************************ 00:05:05.580 04:21:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:05.580 04:21:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.580 04:21:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.580 04:21:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.580 ************************************ 00:05:05.580 START TEST skip_rpc_with_json 00:05:05.580 ************************************ 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=106218 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 106218 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 106218 ']' 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.580 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.580 [2024-11-17 04:21:44.384803] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:05.580 [2024-11-17 04:21:44.384849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106218 ] 00:05:05.841 [2024-11-17 04:21:44.476394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.841 [2024-11-17 04:21:44.498740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.101 [2024-11-17 04:21:44.708606] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:06.101 request: 00:05:06.101 { 00:05:06.101 "trtype": "tcp", 00:05:06.101 "method": "nvmf_get_transports", 00:05:06.101 "req_id": 1 00:05:06.101 } 00:05:06.101 Got JSON-RPC error response 00:05:06.101 response: 00:05:06.101 { 00:05:06.101 "code": -19, 00:05:06.101 "message": "No such device" 00:05:06.101 } 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.101 [2024-11-17 04:21:44.720719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.101 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:06.101 { 00:05:06.101 "subsystems": [ 00:05:06.101 { 00:05:06.101 "subsystem": "fsdev", 00:05:06.101 "config": [ 00:05:06.101 { 00:05:06.101 "method": "fsdev_set_opts", 00:05:06.101 "params": { 00:05:06.101 "fsdev_io_pool_size": 65535, 00:05:06.101 "fsdev_io_cache_size": 256 00:05:06.101 } 00:05:06.101 } 00:05:06.101 ] 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "subsystem": "keyring", 00:05:06.101 "config": [] 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "subsystem": "iobuf", 00:05:06.101 "config": [ 00:05:06.101 { 00:05:06.101 "method": "iobuf_set_options", 00:05:06.101 "params": { 00:05:06.101 "small_pool_count": 8192, 00:05:06.101 "large_pool_count": 1024, 00:05:06.101 "small_bufsize": 8192, 00:05:06.101 "large_bufsize": 135168, 00:05:06.101 "enable_numa": false 00:05:06.101 } 00:05:06.101 } 00:05:06.101 ] 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "subsystem": "sock", 00:05:06.101 "config": [ 00:05:06.101 { 
00:05:06.101 "method": "sock_set_default_impl", 00:05:06.101 "params": { 00:05:06.101 "impl_name": "posix" 00:05:06.101 } 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "method": "sock_impl_set_options", 00:05:06.101 "params": { 00:05:06.101 "impl_name": "ssl", 00:05:06.101 "recv_buf_size": 4096, 00:05:06.101 "send_buf_size": 4096, 00:05:06.101 "enable_recv_pipe": true, 00:05:06.101 "enable_quickack": false, 00:05:06.101 "enable_placement_id": 0, 00:05:06.101 "enable_zerocopy_send_server": true, 00:05:06.101 "enable_zerocopy_send_client": false, 00:05:06.101 "zerocopy_threshold": 0, 00:05:06.101 "tls_version": 0, 00:05:06.101 "enable_ktls": false 00:05:06.101 } 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "method": "sock_impl_set_options", 00:05:06.101 "params": { 00:05:06.101 "impl_name": "posix", 00:05:06.101 "recv_buf_size": 2097152, 00:05:06.101 "send_buf_size": 2097152, 00:05:06.101 "enable_recv_pipe": true, 00:05:06.101 "enable_quickack": false, 00:05:06.101 "enable_placement_id": 0, 00:05:06.101 "enable_zerocopy_send_server": true, 00:05:06.101 "enable_zerocopy_send_client": false, 00:05:06.101 "zerocopy_threshold": 0, 00:05:06.101 "tls_version": 0, 00:05:06.101 "enable_ktls": false 00:05:06.101 } 00:05:06.101 } 00:05:06.101 ] 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "subsystem": "vmd", 00:05:06.101 "config": [] 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "subsystem": "accel", 00:05:06.101 "config": [ 00:05:06.101 { 00:05:06.101 "method": "accel_set_options", 00:05:06.101 "params": { 00:05:06.101 "small_cache_size": 128, 00:05:06.101 "large_cache_size": 16, 00:05:06.101 "task_count": 2048, 00:05:06.101 "sequence_count": 2048, 00:05:06.101 "buf_count": 2048 00:05:06.101 } 00:05:06.101 } 00:05:06.101 ] 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "subsystem": "bdev", 00:05:06.101 "config": [ 00:05:06.101 { 00:05:06.101 "method": "bdev_set_options", 00:05:06.101 "params": { 00:05:06.101 "bdev_io_pool_size": 65535, 00:05:06.101 "bdev_io_cache_size": 256, 00:05:06.101 "bdev_auto_examine": true, 00:05:06.101 "iobuf_small_cache_size": 128, 00:05:06.101 "iobuf_large_cache_size": 16 00:05:06.101 } 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "method": "bdev_raid_set_options", 00:05:06.101 "params": { 00:05:06.101 "process_window_size_kb": 1024, 00:05:06.101 "process_max_bandwidth_mb_sec": 0 00:05:06.101 } 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "method": "bdev_iscsi_set_options", 00:05:06.101 "params": { 00:05:06.101 "timeout_sec": 30 00:05:06.101 } 00:05:06.101 }, 00:05:06.101 { 00:05:06.101 "method": "bdev_nvme_set_options", 00:05:06.101 "params": { 00:05:06.101 "action_on_timeout": "none", 00:05:06.101 "timeout_us": 0, 00:05:06.101 "timeout_admin_us": 0, 00:05:06.101 "keep_alive_timeout_ms": 10000, 00:05:06.101 "arbitration_burst": 0, 00:05:06.101 "low_priority_weight": 0, 00:05:06.101 "medium_priority_weight": 0, 00:05:06.101 "high_priority_weight": 0, 00:05:06.101 "nvme_adminq_poll_period_us": 10000, 00:05:06.101 "nvme_ioq_poll_period_us": 0, 00:05:06.101 "io_queue_requests": 0, 00:05:06.101 "delay_cmd_submit": true, 00:05:06.101 "transport_retry_count": 4, 00:05:06.101 "bdev_retry_count": 3, 00:05:06.101 "transport_ack_timeout": 0, 00:05:06.101 "ctrlr_loss_timeout_sec": 0, 00:05:06.101 "reconnect_delay_sec": 0, 00:05:06.101 "fast_io_fail_timeout_sec": 0, 00:05:06.101 "disable_auto_failback": false, 00:05:06.101 "generate_uuids": false, 00:05:06.101 "transport_tos": 0, 00:05:06.101 "nvme_error_stat": false, 00:05:06.101 "rdma_srq_size": 0, 00:05:06.101 "io_path_stat": false, 
00:05:06.101 "allow_accel_sequence": false, 00:05:06.101 "rdma_max_cq_size": 0, 00:05:06.101 "rdma_cm_event_timeout_ms": 0, 00:05:06.101 "dhchap_digests": [ 00:05:06.102 "sha256", 00:05:06.102 "sha384", 00:05:06.102 "sha512" 00:05:06.102 ], 00:05:06.102 "dhchap_dhgroups": [ 00:05:06.102 "null", 00:05:06.102 "ffdhe2048", 00:05:06.102 "ffdhe3072", 00:05:06.102 "ffdhe4096", 00:05:06.102 "ffdhe6144", 00:05:06.102 "ffdhe8192" 00:05:06.102 ] 00:05:06.102 } 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "method": "bdev_nvme_set_hotplug", 00:05:06.102 "params": { 00:05:06.102 "period_us": 100000, 00:05:06.102 "enable": false 00:05:06.102 } 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "method": "bdev_wait_for_examine" 00:05:06.102 } 00:05:06.102 ] 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "subsystem": "scsi", 00:05:06.102 "config": null 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "subsystem": "scheduler", 00:05:06.102 "config": [ 00:05:06.102 { 00:05:06.102 "method": "framework_set_scheduler", 00:05:06.102 "params": { 00:05:06.102 "name": "static" 00:05:06.102 } 00:05:06.102 } 00:05:06.102 ] 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "subsystem": "vhost_scsi", 00:05:06.102 "config": [] 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "subsystem": "vhost_blk", 00:05:06.102 "config": [] 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "subsystem": "ublk", 00:05:06.102 "config": [] 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "subsystem": "nbd", 00:05:06.102 "config": [] 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "subsystem": "nvmf", 00:05:06.102 "config": [ 00:05:06.102 { 00:05:06.102 "method": "nvmf_set_config", 00:05:06.102 "params": { 00:05:06.102 "discovery_filter": "match_any", 00:05:06.102 "admin_cmd_passthru": { 00:05:06.102 "identify_ctrlr": false 00:05:06.102 }, 00:05:06.102 "dhchap_digests": [ 00:05:06.102 "sha256", 00:05:06.102 "sha384", 00:05:06.102 "sha512" 00:05:06.102 ], 00:05:06.102 "dhchap_dhgroups": [ 00:05:06.102 "null", 00:05:06.102 "ffdhe2048", 00:05:06.102 "ffdhe3072", 00:05:06.102 "ffdhe4096", 00:05:06.102 "ffdhe6144", 00:05:06.102 "ffdhe8192" 00:05:06.102 ] 00:05:06.102 } 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "method": "nvmf_set_max_subsystems", 00:05:06.102 "params": { 00:05:06.102 "max_subsystems": 1024 00:05:06.102 } 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "method": "nvmf_set_crdt", 00:05:06.102 "params": { 00:05:06.102 "crdt1": 0, 00:05:06.102 "crdt2": 0, 00:05:06.102 "crdt3": 0 00:05:06.102 } 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "method": "nvmf_create_transport", 00:05:06.102 "params": { 00:05:06.102 "trtype": "TCP", 00:05:06.102 "max_queue_depth": 128, 00:05:06.102 "max_io_qpairs_per_ctrlr": 127, 00:05:06.102 "in_capsule_data_size": 4096, 00:05:06.102 "max_io_size": 131072, 00:05:06.102 "io_unit_size": 131072, 00:05:06.102 "max_aq_depth": 128, 00:05:06.102 "num_shared_buffers": 511, 00:05:06.102 "buf_cache_size": 4294967295, 00:05:06.102 "dif_insert_or_strip": false, 00:05:06.102 "zcopy": false, 00:05:06.102 "c2h_success": true, 00:05:06.102 "sock_priority": 0, 00:05:06.102 "abort_timeout_sec": 1, 00:05:06.102 "ack_timeout": 0, 00:05:06.102 "data_wr_pool_size": 0 00:05:06.102 } 00:05:06.102 } 00:05:06.102 ] 00:05:06.102 }, 00:05:06.102 { 00:05:06.102 "subsystem": "iscsi", 00:05:06.102 "config": [ 00:05:06.102 { 00:05:06.102 "method": "iscsi_set_options", 00:05:06.102 "params": { 00:05:06.102 "node_base": "iqn.2016-06.io.spdk", 00:05:06.102 "max_sessions": 128, 00:05:06.102 "max_connections_per_session": 2, 00:05:06.102 "max_queue_depth": 64, 00:05:06.102 
"default_time2wait": 2, 00:05:06.102 "default_time2retain": 20, 00:05:06.102 "first_burst_length": 8192, 00:05:06.102 "immediate_data": true, 00:05:06.102 "allow_duplicated_isid": false, 00:05:06.102 "error_recovery_level": 0, 00:05:06.102 "nop_timeout": 60, 00:05:06.102 "nop_in_interval": 30, 00:05:06.102 "disable_chap": false, 00:05:06.102 "require_chap": false, 00:05:06.102 "mutual_chap": false, 00:05:06.102 "chap_group": 0, 00:05:06.102 "max_large_datain_per_connection": 64, 00:05:06.102 "max_r2t_per_connection": 4, 00:05:06.102 "pdu_pool_size": 36864, 00:05:06.102 "immediate_data_pool_size": 16384, 00:05:06.102 "data_out_pool_size": 2048 00:05:06.102 } 00:05:06.102 } 00:05:06.102 ] 00:05:06.102 } 00:05:06.102 ] 00:05:06.102 } 00:05:06.102 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:06.102 04:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 106218 00:05:06.102 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 106218 ']' 00:05:06.102 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 106218 00:05:06.102 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:06.102 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.102 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106218 00:05:06.362 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.362 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.362 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106218' 00:05:06.362 killing process with pid 106218 00:05:06.362 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 106218 00:05:06.362 04:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 106218 00:05:06.622 04:21:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=106251 00:05:06.622 04:21:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:06.622 04:21:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 106251 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 106251 ']' 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 106251 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106251 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106251' 00:05:11.907 killing process with pid 106251 00:05:11.907 04:21:50 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 106251 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 106251 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:11.907 00:05:11.907 real 0m6.297s 00:05:11.907 user 0m5.956s 00:05:11.907 sys 0m0.685s 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.907 ************************************ 00:05:11.907 END TEST skip_rpc_with_json 00:05:11.907 ************************************ 00:05:11.907 04:21:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:11.907 04:21:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.907 04:21:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.907 04:21:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.907 ************************************ 00:05:11.907 START TEST skip_rpc_with_delay 00:05:11.907 ************************************ 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:11.907 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.168 [2024-11-17 04:21:50.773652] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
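The *ERROR* line above is the expected outcome: skip_rpc_with_delay launches spdk_tgt with --no-rpc-server and --wait-for-rpc together and passes only if the target refuses to start. A minimal sketch of that assertion, assuming $SPDK_BIN_DIR (an illustrative name, not the harness's own variable) points at the built binaries:

  # Expect a non-zero exit from the invalid flag combination.
  if "$SPDK_BIN_DIR/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: --wait-for-rpc was accepted without an RPC server" >&2
      exit 1
  fi
  echo "PASS: invalid flag combination rejected, matching the ERROR above"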
00:05:12.168 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:12.168 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.168 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.168 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.168 00:05:12.168 real 0m0.076s 00:05:12.168 user 0m0.043s 00:05:12.168 sys 0m0.033s 00:05:12.168 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.168 04:21:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:12.168 ************************************ 00:05:12.168 END TEST skip_rpc_with_delay 00:05:12.168 ************************************ 00:05:12.168 04:21:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:12.168 04:21:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:12.168 04:21:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:12.168 04:21:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.168 04:21:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.168 04:21:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.168 ************************************ 00:05:12.168 START TEST exit_on_failed_rpc_init 00:05:12.168 ************************************ 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=107354 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 107354 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 107354 ']' 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.168 04:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.168 [2024-11-17 04:21:50.931416] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
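The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the harness's waitforlisten helper. A hedged sketch of that polling pattern follows; the retry count and the probe RPC are assumptions for illustration, not the helper's exact internals:

  rpc_sock=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods is a cheap probe; any successful RPC means the target listens.
      if scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done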
00:05:12.168 [2024-11-17 04:21:50.931464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107354 ] 00:05:12.428 [2024-11-17 04:21:51.023613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.428 [2024-11-17 04:21:51.046046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:12.428 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.689 [2024-11-17 04:21:51.287813] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:12.689 [2024-11-17 04:21:51.287858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107364 ] 00:05:12.689 [2024-11-17 04:21:51.378576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.689 [2024-11-17 04:21:51.400734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.689 [2024-11-17 04:21:51.400812] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
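The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the point of exit_on_failed_rpc_init: a second spdk_tgt bound to the same default socket must fail initialization and exit non-zero. A minimal sketch of the collision, with a sleep standing in for the harness's waitforlisten:

  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &         # first instance owns /var/tmp/spdk.sock
  first_pid=$!
  sleep 1                                   # crude stand-in for waitforlisten
  if "$SPDK_BIN_DIR/spdk_tgt" -m 0x2; then  # same socket: RPC listen fails, app stops non-zero
      echo "FAIL: second instance unexpectedly initialized" >&2
  fi
  kill "$first_pid"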
00:05:12.689 [2024-11-17 04:21:51.400824] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:12.689 [2024-11-17 04:21:51.400832] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 107354 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 107354 ']' 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 107354 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107354 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107354' 00:05:12.689 killing process with pid 107354 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 107354 00:05:12.689 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 107354 00:05:13.261 00:05:13.261 real 0m0.918s 00:05:13.261 user 0m0.917s 00:05:13.261 sys 0m0.444s 00:05:13.261 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.261 04:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.261 ************************************ 00:05:13.261 END TEST exit_on_failed_rpc_init 00:05:13.261 ************************************ 00:05:13.261 04:21:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:13.261 00:05:13.261 real 0m13.208s 00:05:13.261 user 0m12.241s 00:05:13.261 sys 0m1.850s 00:05:13.261 04:21:51 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.261 04:21:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.261 ************************************ 00:05:13.261 END TEST skip_rpc 00:05:13.261 ************************************ 00:05:13.261 04:21:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:13.261 04:21:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.261 04:21:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.261 04:21:51 -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.261 ************************************ 00:05:13.261 START TEST rpc_client 00:05:13.261 ************************************ 00:05:13.261 04:21:51 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:13.261 * Looking for test storage... 00:05:13.261 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:13.261 04:21:52 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.261 04:21:52 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.261 04:21:52 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.522 04:21:52 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.522 04:21:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:13.522 04:21:52 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.522 04:21:52 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.522 --rc genhtml_branch_coverage=1 00:05:13.522 --rc genhtml_function_coverage=1 00:05:13.522 --rc genhtml_legend=1 00:05:13.522 --rc geninfo_all_blocks=1 00:05:13.522 --rc geninfo_unexecuted_blocks=1 00:05:13.522 00:05:13.522 ' 00:05:13.522 04:21:52 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.522 --rc genhtml_branch_coverage=1 00:05:13.522 --rc genhtml_function_coverage=1 00:05:13.522 --rc genhtml_legend=1 00:05:13.522 --rc geninfo_all_blocks=1 00:05:13.522 --rc geninfo_unexecuted_blocks=1 00:05:13.522 00:05:13.522 ' 00:05:13.522 04:21:52 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.522 --rc genhtml_branch_coverage=1 00:05:13.522 --rc genhtml_function_coverage=1 00:05:13.522 --rc genhtml_legend=1 00:05:13.522 --rc geninfo_all_blocks=1 00:05:13.522 --rc geninfo_unexecuted_blocks=1 00:05:13.522 00:05:13.522 ' 00:05:13.522 04:21:52 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.522 --rc genhtml_branch_coverage=1 00:05:13.522 --rc genhtml_function_coverage=1 00:05:13.522 --rc genhtml_legend=1 00:05:13.522 --rc geninfo_all_blocks=1 00:05:13.522 --rc geninfo_unexecuted_blocks=1 00:05:13.522 00:05:13.522 ' 00:05:13.522 04:21:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:13.522 OK 00:05:13.522 04:21:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:13.522 00:05:13.522 real 0m0.217s 00:05:13.522 user 0m0.116s 00:05:13.522 sys 0m0.117s 00:05:13.522 04:21:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.522 04:21:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:13.522 ************************************ 00:05:13.522 END TEST rpc_client 00:05:13.522 ************************************ 00:05:13.522 04:21:52 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:13.522 
04:21:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.523 04:21:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.523 04:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.523 ************************************ 00:05:13.523 START TEST json_config 00:05:13.523 ************************************ 00:05:13.523 04:21:52 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:13.523 04:21:52 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.523 04:21:52 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.523 04:21:52 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.784 04:21:52 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.784 04:21:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.784 04:21:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.784 04:21:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.784 04:21:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.784 04:21:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.784 04:21:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.784 04:21:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.784 04:21:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.784 04:21:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.784 04:21:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.784 04:21:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.784 04:21:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:13.784 04:21:52 json_config -- scripts/common.sh@345 -- # : 1 00:05:13.784 04:21:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.784 04:21:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.784 04:21:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:13.784 04:21:52 json_config -- scripts/common.sh@353 -- # local d=1 00:05:13.784 04:21:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.784 04:21:52 json_config -- scripts/common.sh@355 -- # echo 1 00:05:13.784 04:21:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.784 04:21:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:13.784 04:21:52 json_config -- scripts/common.sh@353 -- # local d=2 00:05:13.784 04:21:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.784 04:21:52 json_config -- scripts/common.sh@355 -- # echo 2 00:05:13.784 04:21:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.784 04:21:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.784 04:21:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.784 04:21:52 json_config -- scripts/common.sh@368 -- # return 0 00:05:13.784 04:21:52 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.784 04:21:52 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.784 --rc genhtml_branch_coverage=1 00:05:13.784 --rc genhtml_function_coverage=1 00:05:13.784 --rc genhtml_legend=1 00:05:13.784 --rc geninfo_all_blocks=1 00:05:13.784 --rc geninfo_unexecuted_blocks=1 00:05:13.784 00:05:13.784 ' 00:05:13.784 04:21:52 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.784 --rc genhtml_branch_coverage=1 00:05:13.784 --rc genhtml_function_coverage=1 00:05:13.784 --rc genhtml_legend=1 00:05:13.784 --rc geninfo_all_blocks=1 00:05:13.784 --rc geninfo_unexecuted_blocks=1 00:05:13.784 00:05:13.784 ' 00:05:13.784 04:21:52 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.784 --rc genhtml_branch_coverage=1 00:05:13.784 --rc genhtml_function_coverage=1 00:05:13.784 --rc genhtml_legend=1 00:05:13.784 --rc geninfo_all_blocks=1 00:05:13.784 --rc geninfo_unexecuted_blocks=1 00:05:13.784 00:05:13.784 ' 00:05:13.784 04:21:52 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.784 --rc genhtml_branch_coverage=1 00:05:13.784 --rc genhtml_function_coverage=1 00:05:13.784 --rc genhtml_legend=1 00:05:13.784 --rc geninfo_all_blocks=1 00:05:13.784 --rc geninfo_unexecuted_blocks=1 00:05:13.784 00:05:13.784 ' 00:05:13.784 04:21:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:13.784 04:21:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.784 04:21:52 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:13.784 04:21:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.784 04:21:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.784 04:21:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.784 04:21:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.784 04:21:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.784 04:21:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.785 04:21:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.785 04:21:52 json_config -- paths/export.sh@5 -- # export PATH 00:05:13.785 04:21:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.785 04:21:52 json_config -- nvmf/common.sh@51 -- # : 0 00:05:13.785 04:21:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.785 04:21:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.785 
04:21:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.785 04:21:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.785 04:21:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.785 04:21:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.785 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.785 04:21:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.785 04:21:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.785 04:21:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:13.785 INFO: JSON configuration test init 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.785 04:21:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:13.785 04:21:52 json_config -- json_config/common.sh@9 -- # 
local app=target 00:05:13.785 04:21:52 json_config -- json_config/common.sh@10 -- # shift 00:05:13.785 04:21:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.785 04:21:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.785 04:21:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.785 04:21:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.785 04:21:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.785 04:21:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=107753 00:05:13.785 04:21:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.785 Waiting for target to run... 00:05:13.785 04:21:52 json_config -- json_config/common.sh@25 -- # waitforlisten 107753 /var/tmp/spdk_tgt.sock 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 107753 ']' 00:05:13.785 04:21:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.785 04:21:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.785 [2024-11-17 04:21:52.495146] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
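json_config_test_start_app, traced above, runs the target on its own RPC socket (/var/tmp/spdk_tgt.sock) so it cannot collide with other suites, holds it paused with --wait-for-rpc, and then feeds configuration over that socket. A condensed sketch of the flow; the gen_nvme.sh pipe mirrors the load_config trace that follows:

  sock=/var/tmp/spdk_tgt.sock
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
  # ...waitforlisten on "$sock"... then push a generated config into the paused target:
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s "$sock" load_config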
00:05:13.785 [2024-11-17 04:21:52.495199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107753 ] 00:05:14.356 [2024-11-17 04:21:52.950000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.356 [2024-11-17 04:21:52.970352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.616 04:21:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.616 04:21:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:14.616 04:21:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.616 00:05:14.616 04:21:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:14.616 04:21:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:14.616 04:21:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.616 04:21:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.616 04:21:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:14.616 04:21:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:14.616 04:21:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.616 04:21:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.616 04:21:53 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:14.616 04:21:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:14.616 04:21:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:17.912 04:21:56 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:17.912 04:21:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:17.912 04:21:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.912 04:21:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.912 04:21:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:17.912 04:21:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:17.912 04:21:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:17.913 04:21:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@54 -- # 
echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@54 -- # sort 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:17.913 04:21:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.913 04:21:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:17.913 04:21:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.913 04:21:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:17.913 04:21:56 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:17.913 04:21:56 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:17.913 04:21:56 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:17.913 04:21:56 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:17.913 04:21:56 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:17.913 04:21:56 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:17.913 04:21:56 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.913 04:21:56 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:17.913 04:21:56 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.913 04:21:56 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:05:17.913 04:21:56 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:17.913 04:21:56 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:17.913 04:21:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:26.047 
04:22:03 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@320 -- # e810=() 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@321 -- # x722=() 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@322 -- # mlx=() 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:26.047 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:26.047 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:26.047 04:22:03 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:26.047 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:26.047 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@62 -- # uname 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:26.047 04:22:03 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@78 -- # ip= 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@80 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_0 up 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:26.048 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:26.048 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:26.048 altname enp217s0f0np0 00:05:26.048 altname ens818f0np0 00:05:26.048 inet 192.168.100.8/24 scope global mlx_0_0 00:05:26.048 valid_lft forever preferred_lft forever 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@78 -- # ip= 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@80 -- # ip addr add 
192.168.100.9/24 dev mlx_0_1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_1 up 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:26.048 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:26.048 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:26.048 altname enp217s0f1np1 00:05:26.048 altname ens818f1np1 00:05:26.048 inet 192.168.100.9/24 scope global mlx_0_1 00:05:26.048 valid_lft forever preferred_lft forever 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@450 -- # return 0 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:26.048 
04:22:03 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:26.048 192.168.100.9' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:26.048 192.168.100.9' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@485 -- # head -n 1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:26.048 192.168.100.9' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@486 -- # head -n 1 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:26.048 04:22:03 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:26.048 04:22:03 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:05:26.048 04:22:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:26.048 04:22:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:26.048 MallocForNvmf0 00:05:26.048 04:22:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:26.048 04:22:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:26.048 MallocForNvmf1 00:05:26.048 04:22:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:26.048 04:22:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:26.048 [2024-11-17 04:22:04.511115] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:26.048 [2024-11-17 04:22:04.574740] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18d0c30/0x17ad780) succeed. 00:05:26.048 [2024-11-17 04:22:04.586912] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18d3e30/0x17eee20) succeed. 
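With 192.168.100.8/24 and 192.168.100.9/24 assigned to mlx_0_0/mlx_0_1 above, the test provisions the target over RPC: two malloc bdevs, then an RDMA transport. The sequence just traced, collected into one place with the flags exactly as logged:

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0   # 8 MB bdev, 512-byte blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1  # 4 MB bdev, 1024-byte blocks
  # -u 8192 sets the IO unit size; -c 0 requests no in-capsule data, which the
  # target bumps to the 256-byte minimum, per the WARNING above.
  $rpc nvmf_create_transport -t rdma -u 8192 -c 0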
00:05:26.048 04:22:04 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:26.048 04:22:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:26.048 04:22:04 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:26.048 04:22:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:26.309 04:22:05 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:26.309 04:22:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:26.569 04:22:05 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:26.569 04:22:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:26.569 [2024-11-17 04:22:05.384184] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:26.829 04:22:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:26.829 04:22:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.829 04:22:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.829 04:22:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:26.829 04:22:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.829 04:22:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.829 04:22:05 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:26.829 04:22:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:26.829 04:22:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:27.089 MallocBdevForConfigChangeCheck 00:05:27.089 04:22:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:27.089 04:22:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.089 04:22:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.089 04:22:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:27.089 04:22:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.349 04:22:06 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:27.349 INFO: shutting down applications... 
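[editor note] Condensed, the RPC sequence traced above builds the whole NVMe-oF target over the UNIX-domain socket: two malloc bdevs, an RDMA transport, one subsystem carrying both namespaces, and a listener on the first RDMA IP. A sketch using the repo-relative rpc.py path (the log uses absolute Jenkins workspace paths):

  rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

  rpc bdev_malloc_create 8 512  --name MallocForNvmf0   # 8 MB bdev, 512 B blocks
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024 B blocks
  rpc nvmf_create_transport -t rdma -u 8192 -c 0        # -c 0 raised to 256 per the warning above
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420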
00:05:27.349 04:22:06 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:27.349 04:22:06 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:27.349 04:22:06 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:27.349 04:22:06 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:29.891 Calling clear_iscsi_subsystem 00:05:29.891 Calling clear_nvmf_subsystem 00:05:29.891 Calling clear_nbd_subsystem 00:05:29.891 Calling clear_ublk_subsystem 00:05:29.891 Calling clear_vhost_blk_subsystem 00:05:29.891 Calling clear_vhost_scsi_subsystem 00:05:29.891 Calling clear_bdev_subsystem 00:05:29.891 04:22:08 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:29.891 04:22:08 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:29.891 04:22:08 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:29.891 04:22:08 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.891 04:22:08 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:29.891 04:22:08 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:30.462 04:22:09 json_config -- json_config/json_config.sh@352 -- # break 00:05:30.462 04:22:09 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:30.462 04:22:09 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:30.462 04:22:09 json_config -- json_config/common.sh@31 -- # local app=target 00:05:30.462 04:22:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:30.462 04:22:09 json_config -- json_config/common.sh@35 -- # [[ -n 107753 ]] 00:05:30.462 04:22:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 107753 00:05:30.462 04:22:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:30.462 04:22:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.462 04:22:09 json_config -- json_config/common.sh@41 -- # kill -0 107753 00:05:30.462 04:22:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.722 04:22:09 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.722 04:22:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.722 04:22:09 json_config -- json_config/common.sh@41 -- # kill -0 107753 00:05:30.722 04:22:09 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:30.722 04:22:09 json_config -- json_config/common.sh@43 -- # break 00:05:30.722 04:22:09 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:30.722 04:22:09 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:30.722 SPDK target shutdown done 00:05:30.722 04:22:09 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:30.722 INFO: relaunching applications... 
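[editor note] The shutdown traced above follows the usual json_config/common.sh pattern: send SIGINT, then poll the pid with kill -0 for up to 30 half-second intervals before giving up. A sketch of that loop:

  shutdown_app() {
      local pid=$1
      kill -SIGINT "$pid"
      for (( i = 0; i < 30; i++ )); do
          # kill -0 sends no signal; it only tests whether the pid still exists
          if ! kill -0 "$pid" 2>/dev/null; then
              echo 'SPDK target shutdown done'
              return 0
          fi
          sleep 0.5
      done
      echo "app pid $pid did not exit after SIGINT" >&2
      return 1
  }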
00:05:30.722 04:22:09 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.722 04:22:09 json_config -- json_config/common.sh@9 -- # local app=target 00:05:30.722 04:22:09 json_config -- json_config/common.sh@10 -- # shift 00:05:30.722 04:22:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.722 04:22:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.722 04:22:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.722 04:22:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.722 04:22:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.722 04:22:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=112864 00:05:30.722 04:22:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.722 Waiting for target to run... 00:05:30.722 04:22:09 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.722 04:22:09 json_config -- json_config/common.sh@25 -- # waitforlisten 112864 /var/tmp/spdk_tgt.sock 00:05:30.722 04:22:09 json_config -- common/autotest_common.sh@835 -- # '[' -z 112864 ']' 00:05:30.722 04:22:09 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.722 04:22:09 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.722 04:22:09 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.722 04:22:09 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.722 04:22:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.983 [2024-11-17 04:22:09.582483] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:30.983 [2024-11-17 04:22:09.582550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112864 ] 00:05:31.243 [2024-11-17 04:22:10.044867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.243 [2024-11-17 04:22:10.065173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.541 [2024-11-17 04:22:13.112248] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fa75d0/0x1e95950) succeed. 00:05:34.541 [2024-11-17 04:22:13.123532] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fa4580/0x1ed6ff0) succeed. 
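[editor note] Relaunching replays the saved JSON straight into a fresh spdk_tgt and then blocks until the RPC socket answers. The polling below is a simplified stand-in for the waitforlisten helper in autotest_common.sh, not its actual implementation:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json &
  app_pid=$!

  # Simplified waitforlisten: poll until the RPC socket responds,
  # bailing out early if the target died during startup.
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null; do
      kill -0 "$app_pid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
      sleep 0.5
  done
  echo ''   # the harness echoes an empty line once the app is up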
00:05:34.541 [2024-11-17 04:22:13.172803] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:35.112 04:22:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.112 04:22:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:35.112 04:22:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:35.112 00:05:35.112 04:22:13 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:35.112 04:22:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:35.112 INFO: Checking if target configuration is the same... 00:05:35.112 04:22:13 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.112 04:22:13 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:35.112 04:22:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.112 + '[' 2 -ne 2 ']' 00:05:35.112 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:35.112 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:35.112 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:35.112 +++ basename /dev/fd/62 00:05:35.112 ++ mktemp /tmp/62.XXX 00:05:35.112 + tmp_file_1=/tmp/62.rYr 00:05:35.112 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.112 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:35.112 + tmp_file_2=/tmp/spdk_tgt_config.json.5xK 00:05:35.112 + ret=0 00:05:35.112 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:35.372 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:35.372 + diff -u /tmp/62.rYr /tmp/spdk_tgt_config.json.5xK 00:05:35.372 + echo 'INFO: JSON config files are the same' 00:05:35.372 INFO: JSON config files are the same 00:05:35.372 + rm /tmp/62.rYr /tmp/spdk_tgt_config.json.5xK 00:05:35.372 + exit 0 00:05:35.632 04:22:14 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:35.632 04:22:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:35.632 INFO: changing configuration and checking if this can be detected... 
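[editor note] The json_diff.sh run above normalizes both sides, the live save_config dump arriving on /dev/fd/62 and the on-disk JSON, through config_filter.py -method sort, then compares the temp copies with diff -u. Roughly, under those same paths:

  tmp_live=$(mktemp /tmp/62.XXX)
  tmp_disk=$(mktemp /tmp/spdk_tgt_config.json.XXX)

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > "$tmp_live"
  test/json_config/config_filter.py -method sort \
      < spdk_tgt_config.json > "$tmp_disk"

  # Identical normalized dumps => the running target matches the saved file.
  if diff -u "$tmp_live" "$tmp_disk"; then
      echo 'INFO: JSON config files are the same'
  fi
  rm -f "$tmp_live" "$tmp_disk"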
00:05:35.632 04:22:14 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:35.632 04:22:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:35.632 04:22:14 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.632 04:22:14 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:35.632 04:22:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.632 + '[' 2 -ne 2 ']' 00:05:35.632 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:35.632 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:35.632 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:35.632 +++ basename /dev/fd/62 00:05:35.632 ++ mktemp /tmp/62.XXX 00:05:35.632 + tmp_file_1=/tmp/62.x6I 00:05:35.632 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.632 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:35.632 + tmp_file_2=/tmp/spdk_tgt_config.json.oKX 00:05:35.632 + ret=0 00:05:35.632 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:36.202 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:36.202 + diff -u /tmp/62.x6I /tmp/spdk_tgt_config.json.oKX 00:05:36.202 + ret=1 00:05:36.202 + echo '=== Start of file: /tmp/62.x6I ===' 00:05:36.202 + cat /tmp/62.x6I 00:05:36.202 + echo '=== End of file: /tmp/62.x6I ===' 00:05:36.202 + echo '' 00:05:36.202 + echo '=== Start of file: /tmp/spdk_tgt_config.json.oKX ===' 00:05:36.202 + cat /tmp/spdk_tgt_config.json.oKX 00:05:36.202 + echo '=== End of file: /tmp/spdk_tgt_config.json.oKX ===' 00:05:36.202 + echo '' 00:05:36.202 + rm /tmp/62.x6I /tmp/spdk_tgt_config.json.oKX 00:05:36.202 + exit 1 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:36.202 INFO: configuration change detected. 
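[editor note] Change detection is the inverse run: delete the sentinel bdev created earlier for exactly this purpose (MallocBdevForConfigChangeCheck) and require the same normalized diff to come back non-empty, which is the ret=1 path above. Sketch:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      bdev_malloc_delete MallocBdevForConfigChangeCheck

  # json_diff.sh exits non-zero when the dumps differ, so success here is a bug.
  if test/json_config/json_diff.sh \
          <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
          spdk_tgt_config.json; then
      echo 'ERROR: configuration change was NOT detected' >&2
      exit 1
  fi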
00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@324 -- # [[ -n 112864 ]] 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.202 04:22:14 json_config -- json_config/json_config.sh@330 -- # killprocess 112864 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@954 -- # '[' -z 112864 ']' 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@958 -- # kill -0 112864 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@959 -- # uname 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112864 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112864' 00:05:36.202 killing process with pid 112864 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@973 -- # kill 112864 00:05:36.202 04:22:14 json_config -- common/autotest_common.sh@978 -- # wait 112864 00:05:38.744 04:22:17 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.744 04:22:17 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:38.744 04:22:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.744 04:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.744 04:22:17 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:38.744 04:22:17 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:38.744 INFO: Success 00:05:38.744 04:22:17 json_config -- json_config/json_config.sh@1 -- 
# nvmftestfini 00:05:38.744 04:22:17 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:38.744 04:22:17 json_config -- nvmf/common.sh@121 -- # sync 00:05:38.744 04:22:17 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:05:38.744 04:22:17 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:05:38.744 04:22:17 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:05:38.744 04:22:17 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:38.744 04:22:17 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:05:38.744 00:05:38.744 real 0m25.284s 00:05:38.744 user 0m28.074s 00:05:38.744 sys 0m7.977s 00:05:38.744 04:22:17 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.744 04:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.744 ************************************ 00:05:38.744 END TEST json_config 00:05:38.744 ************************************ 00:05:38.744 04:22:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:38.744 04:22:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.744 04:22:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.744 04:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:39.005 ************************************ 00:05:39.005 START TEST json_config_extra_key 00:05:39.005 ************************************ 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.006 --rc genhtml_branch_coverage=1 00:05:39.006 --rc genhtml_function_coverage=1 00:05:39.006 --rc genhtml_legend=1 00:05:39.006 --rc geninfo_all_blocks=1 00:05:39.006 --rc geninfo_unexecuted_blocks=1 00:05:39.006 00:05:39.006 ' 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.006 --rc genhtml_branch_coverage=1 00:05:39.006 --rc genhtml_function_coverage=1 00:05:39.006 --rc genhtml_legend=1 00:05:39.006 --rc geninfo_all_blocks=1 00:05:39.006 --rc geninfo_unexecuted_blocks=1 00:05:39.006 00:05:39.006 ' 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.006 --rc genhtml_branch_coverage=1 00:05:39.006 --rc genhtml_function_coverage=1 00:05:39.006 --rc genhtml_legend=1 00:05:39.006 --rc geninfo_all_blocks=1 00:05:39.006 --rc geninfo_unexecuted_blocks=1 00:05:39.006 00:05:39.006 ' 00:05:39.006 04:22:17 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.006 --rc genhtml_branch_coverage=1 00:05:39.006 --rc genhtml_function_coverage=1 00:05:39.006 --rc genhtml_legend=1 00:05:39.006 --rc geninfo_all_blocks=1 00:05:39.006 --rc geninfo_unexecuted_blocks=1 00:05:39.006 00:05:39.006 ' 00:05:39.006 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.006 
04:22:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.006 04:22:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.006 04:22:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.006 04:22:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.006 04:22:17 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.006 04:22:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:39.006 04:22:17 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.006 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.006 04:22:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.006 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:39.006 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:39.006 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:39.007 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:39.007 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:39.007 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:39.007 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:39.007 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:39.007 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:39.007 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.007 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:39.007 INFO: launching applications... 
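[editor note] Note the genuine error captured just above from nvmf/common.sh line 33: with the variable unset, '[' '' -eq 1 ']' aborts with "integer expression expected". The test simply evaluates false and the script carries on, so the suite is unaffected, but a defensive rewrite defaults the empty expansion. Here flag is a stand-in name; the actual variable at that line is not visible in this trace:

  flag=''                                # whatever is unset at nvmf/common.sh:33
  [ "$flag" -eq 1 ] && echo interrupt    # -> "[: : integer expression expected"

  # Defensive variants: treat unset/empty as 0 before the numeric test.
  [ "${flag:-0}" -eq 1 ] && echo interrupt
  (( ${flag:-0} == 1 )) && echo interrupt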
00:05:39.007 04:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=114381 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.007 Waiting for target to run... 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 114381 /var/tmp/spdk_tgt.sock 00:05:39.007 04:22:17 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 114381 ']' 00:05:39.007 04:22:17 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.007 04:22:17 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.007 04:22:17 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.007 04:22:17 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.007 04:22:17 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.007 04:22:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.267 [2024-11-17 04:22:17.852259] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:39.267 [2024-11-17 04:22:17.852311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114381 ] 00:05:39.527 [2024-11-17 04:22:18.165546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.527 [2024-11-17 04:22:18.178696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.098 04:22:18 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.098 04:22:18 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:40.098 04:22:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:40.098 00:05:40.098 04:22:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:40.098 INFO: shutting down applications... 
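[editor note] Each suite in this log opens with the same lcov gate traced above: scripts/common.sh splits the two version strings on '.', '-' and ':' and compares them component-wise, so "lt 1.15 2" becomes cmp_versions '1.15' '<' '2'. A condensed sketch of that comparison, not the verbatim helper (the real one also validates each component through its decimal function):

  cmp_versions() {                       # e.g. cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local op=$2 v d1 d2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}          # missing components count as 0
          (( d1 > d2 )) && { [[ $op == '>' || $op == '>=' ]]; return; }
          (( d1 < d2 )) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]] # all components equal
  }

  lt() { cmp_versions "$1" '<' "$2"; }   # so "lt 1.15 2" succeeds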
00:05:40.098 04:22:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:40.098 04:22:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:40.098 04:22:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.098 04:22:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 114381 ]] 00:05:40.098 04:22:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 114381 00:05:40.098 04:22:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.098 04:22:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.098 04:22:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 114381 00:05:40.098 04:22:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.668 04:22:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.668 04:22:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.668 04:22:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 114381 00:05:40.668 04:22:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.668 04:22:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:40.668 04:22:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.668 04:22:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.668 SPDK target shutdown done 00:05:40.668 04:22:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:40.668 Success 00:05:40.668 00:05:40.668 real 0m1.600s 00:05:40.668 user 0m1.299s 00:05:40.668 sys 0m0.469s 00:05:40.668 04:22:19 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.668 04:22:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.668 ************************************ 00:05:40.668 END TEST json_config_extra_key 00:05:40.668 ************************************ 00:05:40.668 04:22:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.668 04:22:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.668 04:22:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.668 04:22:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.668 ************************************ 00:05:40.668 START TEST alias_rpc 00:05:40.668 ************************************ 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.668 * Looking for test storage... 
00:05:40.668 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.668 04:22:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.668 --rc genhtml_branch_coverage=1 00:05:40.668 --rc genhtml_function_coverage=1 00:05:40.668 --rc genhtml_legend=1 00:05:40.668 --rc geninfo_all_blocks=1 00:05:40.668 --rc geninfo_unexecuted_blocks=1 00:05:40.668 00:05:40.668 ' 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.668 --rc genhtml_branch_coverage=1 00:05:40.668 --rc genhtml_function_coverage=1 00:05:40.668 --rc genhtml_legend=1 00:05:40.668 --rc geninfo_all_blocks=1 00:05:40.668 --rc geninfo_unexecuted_blocks=1 00:05:40.668 00:05:40.668 ' 00:05:40.668 04:22:19 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.668 --rc genhtml_branch_coverage=1 00:05:40.668 --rc genhtml_function_coverage=1 00:05:40.668 --rc genhtml_legend=1 00:05:40.668 --rc geninfo_all_blocks=1 00:05:40.668 --rc geninfo_unexecuted_blocks=1 00:05:40.668 00:05:40.668 ' 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.668 --rc genhtml_branch_coverage=1 00:05:40.668 --rc genhtml_function_coverage=1 00:05:40.668 --rc genhtml_legend=1 00:05:40.668 --rc geninfo_all_blocks=1 00:05:40.668 --rc geninfo_unexecuted_blocks=1 00:05:40.668 00:05:40.668 ' 00:05:40.668 04:22:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.668 04:22:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=114770 00:05:40.668 04:22:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 114770 00:05:40.668 04:22:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 114770 ']' 00:05:40.668 04:22:19 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.669 04:22:19 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.669 04:22:19 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.669 04:22:19 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.669 04:22:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.928 [2024-11-17 04:22:19.519290] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:40.928 [2024-11-17 04:22:19.519343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114770 ] 00:05:40.928 [2024-11-17 04:22:19.612233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.928 [2024-11-17 04:22:19.634400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.188 04:22:19 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.188 04:22:19 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.188 04:22:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:41.448 04:22:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 114770 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 114770 ']' 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 114770 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114770 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114770' 00:05:41.448 killing process with pid 114770 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@973 -- # kill 114770 00:05:41.448 04:22:20 alias_rpc -- common/autotest_common.sh@978 -- # wait 114770 00:05:41.709 00:05:41.709 real 0m1.155s 00:05:41.709 user 0m1.137s 00:05:41.709 sys 0m0.472s 00:05:41.709 04:22:20 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.709 04:22:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.709 ************************************ 00:05:41.709 END TEST alias_rpc 00:05:41.709 ************************************ 00:05:41.709 04:22:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:41.709 04:22:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.709 04:22:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.709 04:22:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.709 04:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:41.709 ************************************ 00:05:41.709 START TEST spdkcli_tcp 00:05:41.709 ************************************ 00:05:41.709 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.969 * Looking for test storage... 
00:05:41.969 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:41.969 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.969 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.969 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.969 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.969 04:22:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.970 04:22:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.970 04:22:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.970 --rc genhtml_branch_coverage=1 00:05:41.970 --rc genhtml_function_coverage=1 00:05:41.970 --rc genhtml_legend=1 00:05:41.970 --rc geninfo_all_blocks=1 00:05:41.970 --rc geninfo_unexecuted_blocks=1 00:05:41.970 00:05:41.970 ' 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.970 --rc genhtml_branch_coverage=1 00:05:41.970 --rc genhtml_function_coverage=1 00:05:41.970 --rc genhtml_legend=1 00:05:41.970 --rc geninfo_all_blocks=1 00:05:41.970 --rc geninfo_unexecuted_blocks=1 
00:05:41.970 00:05:41.970 ' 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.970 --rc genhtml_branch_coverage=1 00:05:41.970 --rc genhtml_function_coverage=1 00:05:41.970 --rc genhtml_legend=1 00:05:41.970 --rc geninfo_all_blocks=1 00:05:41.970 --rc geninfo_unexecuted_blocks=1 00:05:41.970 00:05:41.970 ' 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.970 --rc genhtml_branch_coverage=1 00:05:41.970 --rc genhtml_function_coverage=1 00:05:41.970 --rc genhtml_legend=1 00:05:41.970 --rc geninfo_all_blocks=1 00:05:41.970 --rc geninfo_unexecuted_blocks=1 00:05:41.970 00:05:41.970 ' 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=114983 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.970 04:22:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 114983 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 114983 ']' 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.970 04:22:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.970 [2024-11-17 04:22:20.762749] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:41.970 [2024-11-17 04:22:20.762802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114983 ] 00:05:42.230 [2024-11-17 04:22:20.857443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.230 [2024-11-17 04:22:20.881171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.230 [2024-11-17 04:22:20.881173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.491 04:22:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.491 04:22:21 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:42.491 04:22:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=115124 00:05:42.491 04:22:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.491 04:22:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.491 [ 00:05:42.491 "bdev_malloc_delete", 00:05:42.491 "bdev_malloc_create", 00:05:42.491 "bdev_null_resize", 00:05:42.491 "bdev_null_delete", 00:05:42.491 "bdev_null_create", 00:05:42.491 "bdev_nvme_cuse_unregister", 00:05:42.491 "bdev_nvme_cuse_register", 00:05:42.491 "bdev_opal_new_user", 00:05:42.491 "bdev_opal_set_lock_state", 00:05:42.491 "bdev_opal_delete", 00:05:42.491 "bdev_opal_get_info", 00:05:42.491 "bdev_opal_create", 00:05:42.491 "bdev_nvme_opal_revert", 00:05:42.491 "bdev_nvme_opal_init", 00:05:42.491 "bdev_nvme_send_cmd", 00:05:42.491 "bdev_nvme_set_keys", 00:05:42.491 "bdev_nvme_get_path_iostat", 00:05:42.491 "bdev_nvme_get_mdns_discovery_info", 00:05:42.491 "bdev_nvme_stop_mdns_discovery", 00:05:42.491 "bdev_nvme_start_mdns_discovery", 00:05:42.491 "bdev_nvme_set_multipath_policy", 00:05:42.491 "bdev_nvme_set_preferred_path", 00:05:42.491 "bdev_nvme_get_io_paths", 00:05:42.491 "bdev_nvme_remove_error_injection", 00:05:42.491 "bdev_nvme_add_error_injection", 00:05:42.491 "bdev_nvme_get_discovery_info", 00:05:42.491 "bdev_nvme_stop_discovery", 00:05:42.491 "bdev_nvme_start_discovery", 00:05:42.491 "bdev_nvme_get_controller_health_info", 00:05:42.491 "bdev_nvme_disable_controller", 00:05:42.491 "bdev_nvme_enable_controller", 00:05:42.491 "bdev_nvme_reset_controller", 00:05:42.491 "bdev_nvme_get_transport_statistics", 00:05:42.491 "bdev_nvme_apply_firmware", 00:05:42.491 "bdev_nvme_detach_controller", 00:05:42.491 "bdev_nvme_get_controllers", 00:05:42.491 "bdev_nvme_attach_controller", 00:05:42.491 "bdev_nvme_set_hotplug", 00:05:42.491 "bdev_nvme_set_options", 00:05:42.491 "bdev_passthru_delete", 00:05:42.491 "bdev_passthru_create", 00:05:42.491 "bdev_lvol_set_parent_bdev", 00:05:42.491 "bdev_lvol_set_parent", 00:05:42.491 "bdev_lvol_check_shallow_copy", 00:05:42.491 "bdev_lvol_start_shallow_copy", 00:05:42.491 "bdev_lvol_grow_lvstore", 00:05:42.491 "bdev_lvol_get_lvols", 00:05:42.491 "bdev_lvol_get_lvstores", 00:05:42.491 "bdev_lvol_delete", 00:05:42.491 "bdev_lvol_set_read_only", 00:05:42.491 "bdev_lvol_resize", 00:05:42.491 "bdev_lvol_decouple_parent", 00:05:42.491 "bdev_lvol_inflate", 00:05:42.491 "bdev_lvol_rename", 00:05:42.491 "bdev_lvol_clone_bdev", 00:05:42.491 "bdev_lvol_clone", 00:05:42.491 "bdev_lvol_snapshot", 00:05:42.491 "bdev_lvol_create", 00:05:42.491 "bdev_lvol_delete_lvstore", 00:05:42.491 "bdev_lvol_rename_lvstore", 00:05:42.491 
"bdev_lvol_create_lvstore", 00:05:42.491 "bdev_raid_set_options", 00:05:42.491 "bdev_raid_remove_base_bdev", 00:05:42.491 "bdev_raid_add_base_bdev", 00:05:42.491 "bdev_raid_delete", 00:05:42.491 "bdev_raid_create", 00:05:42.491 "bdev_raid_get_bdevs", 00:05:42.491 "bdev_error_inject_error", 00:05:42.491 "bdev_error_delete", 00:05:42.491 "bdev_error_create", 00:05:42.491 "bdev_split_delete", 00:05:42.491 "bdev_split_create", 00:05:42.491 "bdev_delay_delete", 00:05:42.491 "bdev_delay_create", 00:05:42.491 "bdev_delay_update_latency", 00:05:42.491 "bdev_zone_block_delete", 00:05:42.491 "bdev_zone_block_create", 00:05:42.491 "blobfs_create", 00:05:42.491 "blobfs_detect", 00:05:42.491 "blobfs_set_cache_size", 00:05:42.491 "bdev_aio_delete", 00:05:42.491 "bdev_aio_rescan", 00:05:42.491 "bdev_aio_create", 00:05:42.491 "bdev_ftl_set_property", 00:05:42.491 "bdev_ftl_get_properties", 00:05:42.491 "bdev_ftl_get_stats", 00:05:42.491 "bdev_ftl_unmap", 00:05:42.491 "bdev_ftl_unload", 00:05:42.491 "bdev_ftl_delete", 00:05:42.491 "bdev_ftl_load", 00:05:42.491 "bdev_ftl_create", 00:05:42.491 "bdev_virtio_attach_controller", 00:05:42.491 "bdev_virtio_scsi_get_devices", 00:05:42.491 "bdev_virtio_detach_controller", 00:05:42.491 "bdev_virtio_blk_set_hotplug", 00:05:42.491 "bdev_iscsi_delete", 00:05:42.491 "bdev_iscsi_create", 00:05:42.491 "bdev_iscsi_set_options", 00:05:42.491 "accel_error_inject_error", 00:05:42.491 "ioat_scan_accel_module", 00:05:42.491 "dsa_scan_accel_module", 00:05:42.491 "iaa_scan_accel_module", 00:05:42.491 "keyring_file_remove_key", 00:05:42.491 "keyring_file_add_key", 00:05:42.491 "keyring_linux_set_options", 00:05:42.491 "fsdev_aio_delete", 00:05:42.491 "fsdev_aio_create", 00:05:42.491 "iscsi_get_histogram", 00:05:42.491 "iscsi_enable_histogram", 00:05:42.491 "iscsi_set_options", 00:05:42.492 "iscsi_get_auth_groups", 00:05:42.492 "iscsi_auth_group_remove_secret", 00:05:42.492 "iscsi_auth_group_add_secret", 00:05:42.492 "iscsi_delete_auth_group", 00:05:42.492 "iscsi_create_auth_group", 00:05:42.492 "iscsi_set_discovery_auth", 00:05:42.492 "iscsi_get_options", 00:05:42.492 "iscsi_target_node_request_logout", 00:05:42.492 "iscsi_target_node_set_redirect", 00:05:42.492 "iscsi_target_node_set_auth", 00:05:42.492 "iscsi_target_node_add_lun", 00:05:42.492 "iscsi_get_stats", 00:05:42.492 "iscsi_get_connections", 00:05:42.492 "iscsi_portal_group_set_auth", 00:05:42.492 "iscsi_start_portal_group", 00:05:42.492 "iscsi_delete_portal_group", 00:05:42.492 "iscsi_create_portal_group", 00:05:42.492 "iscsi_get_portal_groups", 00:05:42.492 "iscsi_delete_target_node", 00:05:42.492 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.492 "iscsi_target_node_add_pg_ig_maps", 00:05:42.492 "iscsi_create_target_node", 00:05:42.492 "iscsi_get_target_nodes", 00:05:42.492 "iscsi_delete_initiator_group", 00:05:42.492 "iscsi_initiator_group_remove_initiators", 00:05:42.492 "iscsi_initiator_group_add_initiators", 00:05:42.492 "iscsi_create_initiator_group", 00:05:42.492 "iscsi_get_initiator_groups", 00:05:42.492 "nvmf_set_crdt", 00:05:42.492 "nvmf_set_config", 00:05:42.492 "nvmf_set_max_subsystems", 00:05:42.492 "nvmf_stop_mdns_prr", 00:05:42.492 "nvmf_publish_mdns_prr", 00:05:42.492 "nvmf_subsystem_get_listeners", 00:05:42.492 "nvmf_subsystem_get_qpairs", 00:05:42.492 "nvmf_subsystem_get_controllers", 00:05:42.492 "nvmf_get_stats", 00:05:42.492 "nvmf_get_transports", 00:05:42.492 "nvmf_create_transport", 00:05:42.492 "nvmf_get_targets", 00:05:42.492 "nvmf_delete_target", 00:05:42.492 "nvmf_create_target", 00:05:42.492 
"nvmf_subsystem_allow_any_host", 00:05:42.492 "nvmf_subsystem_set_keys", 00:05:42.492 "nvmf_subsystem_remove_host", 00:05:42.492 "nvmf_subsystem_add_host", 00:05:42.492 "nvmf_ns_remove_host", 00:05:42.492 "nvmf_ns_add_host", 00:05:42.492 "nvmf_subsystem_remove_ns", 00:05:42.492 "nvmf_subsystem_set_ns_ana_group", 00:05:42.492 "nvmf_subsystem_add_ns", 00:05:42.492 "nvmf_subsystem_listener_set_ana_state", 00:05:42.492 "nvmf_discovery_get_referrals", 00:05:42.492 "nvmf_discovery_remove_referral", 00:05:42.492 "nvmf_discovery_add_referral", 00:05:42.492 "nvmf_subsystem_remove_listener", 00:05:42.492 "nvmf_subsystem_add_listener", 00:05:42.492 "nvmf_delete_subsystem", 00:05:42.492 "nvmf_create_subsystem", 00:05:42.492 "nvmf_get_subsystems", 00:05:42.492 "env_dpdk_get_mem_stats", 00:05:42.492 "nbd_get_disks", 00:05:42.492 "nbd_stop_disk", 00:05:42.492 "nbd_start_disk", 00:05:42.492 "ublk_recover_disk", 00:05:42.492 "ublk_get_disks", 00:05:42.492 "ublk_stop_disk", 00:05:42.492 "ublk_start_disk", 00:05:42.492 "ublk_destroy_target", 00:05:42.492 "ublk_create_target", 00:05:42.492 "virtio_blk_create_transport", 00:05:42.492 "virtio_blk_get_transports", 00:05:42.492 "vhost_controller_set_coalescing", 00:05:42.492 "vhost_get_controllers", 00:05:42.492 "vhost_delete_controller", 00:05:42.492 "vhost_create_blk_controller", 00:05:42.492 "vhost_scsi_controller_remove_target", 00:05:42.492 "vhost_scsi_controller_add_target", 00:05:42.492 "vhost_start_scsi_controller", 00:05:42.492 "vhost_create_scsi_controller", 00:05:42.492 "thread_set_cpumask", 00:05:42.492 "scheduler_set_options", 00:05:42.492 "framework_get_governor", 00:05:42.492 "framework_get_scheduler", 00:05:42.492 "framework_set_scheduler", 00:05:42.492 "framework_get_reactors", 00:05:42.492 "thread_get_io_channels", 00:05:42.492 "thread_get_pollers", 00:05:42.492 "thread_get_stats", 00:05:42.492 "framework_monitor_context_switch", 00:05:42.492 "spdk_kill_instance", 00:05:42.492 "log_enable_timestamps", 00:05:42.492 "log_get_flags", 00:05:42.492 "log_clear_flag", 00:05:42.492 "log_set_flag", 00:05:42.492 "log_get_level", 00:05:42.492 "log_set_level", 00:05:42.492 "log_get_print_level", 00:05:42.492 "log_set_print_level", 00:05:42.492 "framework_enable_cpumask_locks", 00:05:42.492 "framework_disable_cpumask_locks", 00:05:42.492 "framework_wait_init", 00:05:42.492 "framework_start_init", 00:05:42.492 "scsi_get_devices", 00:05:42.492 "bdev_get_histogram", 00:05:42.492 "bdev_enable_histogram", 00:05:42.492 "bdev_set_qos_limit", 00:05:42.492 "bdev_set_qd_sampling_period", 00:05:42.492 "bdev_get_bdevs", 00:05:42.492 "bdev_reset_iostat", 00:05:42.492 "bdev_get_iostat", 00:05:42.492 "bdev_examine", 00:05:42.492 "bdev_wait_for_examine", 00:05:42.492 "bdev_set_options", 00:05:42.492 "accel_get_stats", 00:05:42.492 "accel_set_options", 00:05:42.492 "accel_set_driver", 00:05:42.492 "accel_crypto_key_destroy", 00:05:42.492 "accel_crypto_keys_get", 00:05:42.492 "accel_crypto_key_create", 00:05:42.492 "accel_assign_opc", 00:05:42.492 "accel_get_module_info", 00:05:42.492 "accel_get_opc_assignments", 00:05:42.492 "vmd_rescan", 00:05:42.492 "vmd_remove_device", 00:05:42.492 "vmd_enable", 00:05:42.492 "sock_get_default_impl", 00:05:42.492 "sock_set_default_impl", 00:05:42.492 "sock_impl_set_options", 00:05:42.492 "sock_impl_get_options", 00:05:42.492 "iobuf_get_stats", 00:05:42.492 "iobuf_set_options", 00:05:42.492 "keyring_get_keys", 00:05:42.492 "framework_get_pci_devices", 00:05:42.492 "framework_get_config", 00:05:42.492 "framework_get_subsystems", 00:05:42.492 
"fsdev_set_opts", 00:05:42.492 "fsdev_get_opts", 00:05:42.492 "trace_get_info", 00:05:42.492 "trace_get_tpoint_group_mask", 00:05:42.492 "trace_disable_tpoint_group", 00:05:42.492 "trace_enable_tpoint_group", 00:05:42.492 "trace_clear_tpoint_mask", 00:05:42.492 "trace_set_tpoint_mask", 00:05:42.492 "notify_get_notifications", 00:05:42.492 "notify_get_types", 00:05:42.492 "spdk_get_version", 00:05:42.492 "rpc_get_methods" 00:05:42.492 ] 00:05:42.492 04:22:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.492 04:22:21 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.492 04:22:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.753 04:22:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.753 04:22:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 114983 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 114983 ']' 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 114983 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114983 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114983' 00:05:42.753 killing process with pid 114983 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 114983 00:05:42.753 04:22:21 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 114983 00:05:43.013 00:05:43.013 real 0m1.168s 00:05:43.013 user 0m1.903s 00:05:43.013 sys 0m0.534s 00:05:43.013 04:22:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.013 04:22:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.013 ************************************ 00:05:43.013 END TEST spdkcli_tcp 00:05:43.013 ************************************ 00:05:43.013 04:22:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.013 04:22:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.013 04:22:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.013 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.013 ************************************ 00:05:43.013 START TEST dpdk_mem_utility 00:05:43.013 ************************************ 00:05:43.013 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.274 * Looking for test storage... 
00:05:43.274 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.274 04:22:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.274 --rc genhtml_branch_coverage=1 00:05:43.274 --rc genhtml_function_coverage=1 00:05:43.274 --rc genhtml_legend=1 00:05:43.274 --rc geninfo_all_blocks=1 00:05:43.274 --rc geninfo_unexecuted_blocks=1 00:05:43.274 00:05:43.274 ' 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.274 --rc 
genhtml_branch_coverage=1 00:05:43.274 --rc genhtml_function_coverage=1 00:05:43.274 --rc genhtml_legend=1 00:05:43.274 --rc geninfo_all_blocks=1 00:05:43.274 --rc geninfo_unexecuted_blocks=1 00:05:43.274 00:05:43.274 ' 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.274 --rc genhtml_branch_coverage=1 00:05:43.274 --rc genhtml_function_coverage=1 00:05:43.274 --rc genhtml_legend=1 00:05:43.274 --rc geninfo_all_blocks=1 00:05:43.274 --rc geninfo_unexecuted_blocks=1 00:05:43.274 00:05:43.274 ' 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.274 --rc genhtml_branch_coverage=1 00:05:43.274 --rc genhtml_function_coverage=1 00:05:43.274 --rc genhtml_legend=1 00:05:43.274 --rc geninfo_all_blocks=1 00:05:43.274 --rc geninfo_unexecuted_blocks=1 00:05:43.274 00:05:43.274 ' 00:05:43.274 04:22:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:43.274 04:22:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=115327 00:05:43.274 04:22:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 115327 00:05:43.274 04:22:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 115327 ']' 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.274 04:22:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.274 [2024-11-17 04:22:22.004373] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
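For context on the trace above: test_dpdk_mem_info.sh backgrounds spdk_tgt and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch pattern, condensed from the trace (waitforlisten and killprocess are the autotest helpers sourced from autotest_common.sh):
spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
$spdk_tgt &                                        # start the target in the background
spdkpid=$!
trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT    # cleanup trap, as in the trace
waitforlisten $spdkpid                             # poll until /var/tmp/spdk.sock is live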
00:05:43.274 [2024-11-17 04:22:22.004423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115327 ]
00:05:43.274 [2024-11-17 04:22:22.092796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:43.534 [2024-11-17 04:22:22.114901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.106 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:44.106 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:44.106 04:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:44.106 04:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:44.106 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:44.106 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:44.106 {
00:05:44.106 "filename": "/tmp/spdk_mem_dump.txt"
00:05:44.106 }
00:05:44.106 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:44.106 04:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:44.106 DPDK memory size 810.000000 MiB in 1 heap(s)
00:05:44.106 1 heaps totaling size 810.000000 MiB
00:05:44.106 size: 810.000000 MiB heap id: 0
00:05:44.106 end heaps----------
00:05:44.106 9 mempools totaling size 595.772034 MiB
00:05:44.106 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:44.106 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:44.106 size: 92.545471 MiB name: bdev_io_115327
00:05:44.106 size: 50.003479 MiB name: msgpool_115327
00:05:44.106 size: 36.509338 MiB name: fsdev_io_115327
00:05:44.106 size: 21.763794 MiB name: PDU_Pool
00:05:44.106 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:44.106 size: 4.133484 MiB name: evtpool_115327
00:05:44.106 size: 0.026123 MiB name: Session_Pool
00:05:44.106 end mempools-------
00:05:44.106 6 memzones totaling size 4.142822 MiB
00:05:44.106 size: 1.000366 MiB name: RG_ring_0_115327
00:05:44.106 size: 1.000366 MiB name: RG_ring_1_115327
00:05:44.106 size: 1.000366 MiB name: RG_ring_4_115327
00:05:44.106 size: 1.000366 MiB name: RG_ring_5_115327
00:05:44.106 size: 0.125366 MiB name: RG_ring_2_115327
00:05:44.106 size: 0.015991 MiB name: RG_ring_3_115327
00:05:44.106 end memzones-------
00:05:44.106 04:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:44.367 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15
00:05:44.367 list of free elements.
size: 10.862488 MiB 00:05:44.367 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:44.367 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:44.367 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:44.367 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:44.367 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:44.367 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:44.367 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:44.367 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:44.367 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:44.367 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:44.367 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:44.367 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:44.367 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:44.367 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:44.367 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:44.367 list of standard malloc elements. size: 199.218628 MiB 00:05:44.367 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:44.367 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:44.367 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:44.367 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:44.367 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:44.367 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.367 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:44.367 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.367 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:44.367 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:44.367 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:44.367 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:44.367 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:44.367 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:44.367 list of memzone associated elements. size: 599.918884 MiB 00:05:44.367 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:44.367 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.367 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:44.367 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.367 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:44.367 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_115327_0 00:05:44.367 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:44.367 associated memzone info: size: 48.002930 MiB name: MP_msgpool_115327_0 00:05:44.367 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:44.367 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_115327_0 00:05:44.367 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:44.367 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.367 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:44.367 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.367 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:44.367 associated memzone info: size: 3.000122 MiB name: MP_evtpool_115327_0 00:05:44.367 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:44.367 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_115327 00:05:44.367 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.367 associated memzone info: size: 1.007996 MiB name: MP_evtpool_115327 00:05:44.367 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:44.367 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.367 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:44.367 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.368 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:44.368 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.368 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:44.368 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.368 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:44.368 associated memzone info: size: 1.000366 MiB name: RG_ring_0_115327 00:05:44.368 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:44.368 associated memzone info: size: 1.000366 MiB name: RG_ring_1_115327 00:05:44.368 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:44.368 associated memzone info: size: 1.000366 MiB name: RG_ring_4_115327 00:05:44.368 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:44.368 associated memzone info: size: 1.000366 MiB name: RG_ring_5_115327 00:05:44.368 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:44.368 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_115327 00:05:44.368 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:44.368 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_115327 00:05:44.368 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:44.368 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.368 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:44.368 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.368 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:44.368 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.368 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:44.368 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_115327 00:05:44.368 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:44.368 associated memzone info: size: 0.125366 MiB name: RG_ring_2_115327 00:05:44.368 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:44.368 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.368 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:44.368 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.368 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:44.368 associated memzone info: size: 0.015991 MiB name: RG_ring_3_115327 00:05:44.368 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:44.368 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.368 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:44.368 associated memzone info: size: 0.000183 MiB name: MP_msgpool_115327 00:05:44.368 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:44.368 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_115327 00:05:44.368 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:44.368 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_115327 00:05:44.368 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:44.368 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.368 04:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.368 04:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 115327 00:05:44.368 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 115327 ']' 00:05:44.368 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 115327 00:05:44.368 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:44.368 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.368 04:22:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115327 00:05:44.368 04:22:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.368 04:22:23 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.368 04:22:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115327' 00:05:44.368 killing process with pid 115327 00:05:44.368 04:22:23 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 115327 00:05:44.368 04:22:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 115327 00:05:44.629 00:05:44.629 real 0m1.549s 00:05:44.629 user 0m1.569s 00:05:44.629 sys 0m0.522s 00:05:44.629 04:22:23 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.629 04:22:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.629 ************************************ 00:05:44.629 END TEST dpdk_mem_utility 00:05:44.629 ************************************ 00:05:44.629 04:22:23 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:44.629 04:22:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.629 04:22:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.629 04:22:23 -- common/autotest_common.sh@10 -- # set +x 00:05:44.629 ************************************ 00:05:44.629 START TEST event 00:05:44.629 ************************************ 00:05:44.629 04:22:23 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:44.890 * Looking for test storage... 00:05:44.890 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.890 04:22:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.890 04:22:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.890 04:22:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.890 04:22:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.890 04:22:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.890 04:22:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.890 04:22:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.890 04:22:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.890 04:22:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.890 04:22:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.890 04:22:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.890 04:22:23 event -- scripts/common.sh@344 -- # case "$op" in 00:05:44.890 04:22:23 event -- scripts/common.sh@345 -- # : 1 00:05:44.890 04:22:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.890 04:22:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.890 04:22:23 event -- scripts/common.sh@365 -- # decimal 1 00:05:44.890 04:22:23 event -- scripts/common.sh@353 -- # local d=1 00:05:44.890 04:22:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.890 04:22:23 event -- scripts/common.sh@355 -- # echo 1 00:05:44.890 04:22:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.890 04:22:23 event -- scripts/common.sh@366 -- # decimal 2 00:05:44.890 04:22:23 event -- scripts/common.sh@353 -- # local d=2 00:05:44.890 04:22:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.890 04:22:23 event -- scripts/common.sh@355 -- # echo 2 00:05:44.890 04:22:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.890 04:22:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.890 04:22:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.890 04:22:23 event -- scripts/common.sh@368 -- # return 0 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.890 --rc genhtml_branch_coverage=1 00:05:44.890 --rc genhtml_function_coverage=1 00:05:44.890 --rc genhtml_legend=1 00:05:44.890 --rc geninfo_all_blocks=1 00:05:44.890 --rc geninfo_unexecuted_blocks=1 00:05:44.890 00:05:44.890 ' 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.890 --rc genhtml_branch_coverage=1 00:05:44.890 --rc genhtml_function_coverage=1 00:05:44.890 --rc genhtml_legend=1 00:05:44.890 --rc geninfo_all_blocks=1 00:05:44.890 --rc geninfo_unexecuted_blocks=1 00:05:44.890 00:05:44.890 ' 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.890 --rc genhtml_branch_coverage=1 00:05:44.890 --rc genhtml_function_coverage=1 00:05:44.890 --rc genhtml_legend=1 00:05:44.890 --rc geninfo_all_blocks=1 00:05:44.890 --rc geninfo_unexecuted_blocks=1 00:05:44.890 00:05:44.890 ' 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.890 --rc genhtml_branch_coverage=1 00:05:44.890 --rc genhtml_function_coverage=1 00:05:44.890 --rc genhtml_legend=1 00:05:44.890 --rc geninfo_all_blocks=1 00:05:44.890 --rc geninfo_unexecuted_blocks=1 00:05:44.890 00:05:44.890 ' 00:05:44.890 04:22:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:44.890 04:22:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:44.890 04:22:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:44.890 04:22:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.890 04:22:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.890 ************************************ 00:05:44.890 START TEST event_perf 00:05:44.890 ************************************ 00:05:44.890 04:22:23 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
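event_perf is launched above with -m 0xF, a hex core mask selecting reactors on cores 0-3, which is why four lcore counters appear in the run output that follows. A quick illustrative decode of such a mask (not test code):
# Print the core ids selected by a reactor core mask.
mask=0xF
for ((i = 0; i < 64; i++)); do
    (( (mask >> i) & 1 )) && echo "core $i"
done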
00:05:44.890 Running I/O for 1 seconds...[2024-11-17 04:22:23.647485] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:05:44.890 [2024-11-17 04:22:23.647548] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115653 ]
00:05:45.150 [2024-11-17 04:22:23.739122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:45.151 [2024-11-17 04:22:23.765876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:45.151 [2024-11-17 04:22:23.765951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:45.151 [2024-11-17 04:22:23.766021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.151 [2024-11-17 04:22:23.766022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:46.091 Running I/O for 1 seconds...
00:05:46.091 lcore 0: 202112
00:05:46.091 lcore 1: 202111
00:05:46.091 lcore 2: 202112
00:05:46.091 lcore 3: 202113
00:05:46.091 done.
00:05:46.091
00:05:46.091 real 0m1.177s
00:05:46.091 user 0m4.079s
00:05:46.091 sys 0m0.095s
00:05:46.091 04:22:24 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:46.091 04:22:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:46.091 ************************************
00:05:46.091 END TEST event_perf
00:05:46.091 ************************************
00:05:46.091 04:22:24 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:46.091 04:22:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:46.091 04:22:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.091 04:22:24 event -- common/autotest_common.sh@10 -- # set +x
00:05:46.091 ************************************
00:05:46.091 START TEST event_reactor
00:05:46.091 ************************************
00:05:46.092 04:22:24 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:46.092 [2024-11-17 04:22:24.911745] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
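A note on the event_perf numbers above, before the reactor run continues below: each of the four reactors retired roughly 202k events in the one-second window, about 808k events/sec aggregate. A hedged sketch for totalling a captured per-lcore report like that one (the log file name is an assumption):
# Sum the per-lcore counters from a saved event_perf run.
awk '/^lcore/ { n++; total += $3 } END { print total " events across " n " lcores" }' event_perf.log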
00:05:46.092 [2024-11-17 04:22:24.911825] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115938 ]
00:05:46.352 [2024-11-17 04:22:25.007711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.352 [2024-11-17 04:22:25.030321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.292 test_start
00:05:47.292 oneshot
00:05:47.292 tick 100
00:05:47.292 tick 100
00:05:47.292 tick 250
00:05:47.292 tick 100
00:05:47.292 tick 100
00:05:47.292 tick 250
00:05:47.292 tick 100
00:05:47.292 tick 500
00:05:47.292 tick 100
00:05:47.292 tick 100
00:05:47.292 tick 250
00:05:47.292 tick 100
00:05:47.292 tick 100
00:05:47.292 test_end
00:05:47.292
00:05:47.292 real 0m1.173s
00:05:47.292 user 0m1.070s
00:05:47.292 sys 0m0.098s
00:05:47.292 04:22:26 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:47.292 04:22:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:47.292 ************************************
00:05:47.292 END TEST event_reactor
00:05:47.292 ************************************
00:05:47.292 04:22:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:47.292 04:22:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:47.292 04:22:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:47.292 04:22:26 event -- common/autotest_common.sh@10 -- # set +x
00:05:47.552 ************************************
00:05:47.552 START TEST event_reactor_perf
00:05:47.552 ************************************
00:05:47.552 04:22:26 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:47.552 [2024-11-17 04:22:26.169982] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
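The tick lines in the reactor test above are weighted timer events firing over the 1-second run (weights 100, 250, and 500). A hedged one-liner for tallying them from a saved log (file name assumed); on the run above it would report 13 ticks with a total weight of 2150:
# Count and sum the weighted ticks from a captured reactor run.
awk '/^tick/ { n++; sum += $2 } END { print n " ticks, total weight " sum }' reactor.log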
00:05:47.552 [2024-11-17 04:22:26.170063] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116230 ]
00:05:47.552 [2024-11-17 04:22:26.265362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.552 [2024-11-17 04:22:26.286410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.512 test_start
00:05:48.512 test_end
00:05:48.512 Performance: 528166 events per second
00:05:48.512
00:05:48.512 real 0m1.169s
00:05:48.512 user 0m1.069s
00:05:48.512 sys 0m0.096s
00:05:48.512 04:22:27 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:48.512 04:22:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:48.512 ************************************
00:05:48.512 END TEST event_reactor_perf
00:05:48.512 ************************************
00:05:48.773 04:22:27 event -- event/event.sh@49 -- # uname -s
00:05:48.773 04:22:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:48.773 04:22:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:48.773 04:22:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:48.773 04:22:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:48.773 04:22:27 event -- common/autotest_common.sh@10 -- # set +x
00:05:48.773 ************************************
00:05:48.773 START TEST event_scheduler
00:05:48.773 ************************************
00:05:48.773 04:22:27 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:48.773 * Looking for test storage...
00:05:48.773 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:48.773 04:22:27 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.773 04:22:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.773 04:22:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.774 04:22:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.774 04:22:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:48.774 04:22:27 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.774 04:22:27 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.774 --rc genhtml_branch_coverage=1 00:05:48.774 --rc genhtml_function_coverage=1 00:05:48.774 --rc genhtml_legend=1 00:05:48.774 --rc geninfo_all_blocks=1 00:05:48.774 --rc geninfo_unexecuted_blocks=1 00:05:48.774 00:05:48.775 ' 00:05:48.775 04:22:27 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.775 --rc genhtml_branch_coverage=1 00:05:48.775 --rc genhtml_function_coverage=1 00:05:48.775 --rc genhtml_legend=1 00:05:48.775 --rc geninfo_all_blocks=1 00:05:48.775 --rc geninfo_unexecuted_blocks=1 00:05:48.775 00:05:48.775 ' 00:05:48.775 04:22:27 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.775 --rc genhtml_branch_coverage=1 00:05:48.775 --rc genhtml_function_coverage=1 00:05:48.775 --rc genhtml_legend=1 00:05:48.775 --rc geninfo_all_blocks=1 00:05:48.775 --rc geninfo_unexecuted_blocks=1 00:05:48.775 00:05:48.775 ' 00:05:48.775 04:22:27 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.776 --rc genhtml_branch_coverage=1 00:05:48.776 --rc genhtml_function_coverage=1 00:05:48.776 --rc genhtml_legend=1 00:05:48.776 --rc geninfo_all_blocks=1 00:05:48.776 --rc geninfo_unexecuted_blocks=1 00:05:48.776 00:05:48.776 ' 00:05:48.776 04:22:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.776 04:22:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=116550 00:05:48.776 04:22:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.776 04:22:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.776 04:22:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 116550 
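Because the scheduler app above is launched with --wait-for-rpc, framework initialization pauses until the test selects a scheduler over RPC, which is what the rpc_cmd calls traced below do. A minimal sketch of that handshake (default socket assumed; both method names appear in the rpc_get_methods listing earlier in this log):
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc framework_set_scheduler dynamic   # select the dynamic scheduler
$rpc framework_start_init              # let the paused framework finish init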
00:05:48.776 04:22:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 116550 ']' 00:05:48.776 04:22:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.776 04:22:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.776 04:22:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.776 04:22:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.776 04:22:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.040 [2024-11-17 04:22:27.641780] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:49.040 [2024-11-17 04:22:27.641832] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116550 ] 00:05:49.040 [2024-11-17 04:22:27.731682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.040 [2024-11-17 04:22:27.757253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.040 [2024-11-17 04:22:27.757365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.040 [2024-11-17 04:22:27.757450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.040 [2024-11-17 04:22:27.757451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.040 04:22:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.040 04:22:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:49.040 04:22:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.040 04:22:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.040 04:22:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.040 [2024-11-17 04:22:27.806151] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:49.040 [2024-11-17 04:22:27.806170] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.040 [2024-11-17 04:22:27.806181] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.040 [2024-11-17 04:22:27.806188] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.040 [2024-11-17 04:22:27.806196] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.040 04:22:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.040 04:22:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.040 04:22:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.040 04:22:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.300 [2024-11-17 04:22:27.879150] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
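The set_opts notices above show the dynamic scheduler's thresholds for this run: load limit 20, core limit 80, core busy 95. As a loudly hedged sketch, these look tunable through the scheduler_set_options method from the earlier method list, though the option spellings below are assumptions, not taken from this run:
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Hypothetical parameter names; only the method name is confirmed by this log.
$rpc scheduler_set_options --load-limit 20 --core-limit 80 --core-busy 95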
00:05:49.300 04:22:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.300 04:22:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.300 04:22:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.300 04:22:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.300 04:22:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.300 ************************************ 00:05:49.300 START TEST scheduler_create_thread 00:05:49.300 ************************************ 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.300 2 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.300 3 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.300 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 4 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 5 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 6 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 7 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 8 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 9 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.301 04:22:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 10 00:05:49.301 04:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.301 04:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:49.301 04:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.301 04:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 04:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.301 04:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.301 04:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.301 04:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.301 04:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.872 04:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.872 04:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:49.872 04:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.872 04:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.255 04:22:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.255 04:22:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:51.255 04:22:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:51.256 04:22:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.256 04:22:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.196 04:22:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.196 00:05:52.196 real 0m3.100s 00:05:52.196 user 0m0.026s 00:05:52.196 sys 0m0.005s 00:05:52.196 04:22:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.196 04:22:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.196 ************************************ 00:05:52.196 END TEST scheduler_create_thread 00:05:52.196 ************************************ 00:05:52.456 04:22:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:52.456 04:22:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 116550 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 116550 ']' 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 116550 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116550 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116550' 00:05:52.456 killing process with pid 116550 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 116550 00:05:52.456 04:22:31 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 116550 00:05:52.715 [2024-11-17 04:22:31.402445] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
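The scheduler_create_thread test above drives everything through a plugin RPC module: rpc_cmd --plugin scheduler_plugin creates pinned threads with given cpumasks and active percentages, flips one to 50% active, and deletes one by id. A condensed sketch of the same calls (thread ids 11 and 12 are the values this run handed back; the plugin must be importable, as the test arranges):
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
$rpc --plugin scheduler_plugin scheduler_thread_delete 12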
00:05:52.975 00:05:52.975 real 0m4.183s 00:05:52.975 user 0m6.647s 00:05:52.975 sys 0m0.461s 00:05:52.975 04:22:31 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.975 04:22:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.975 ************************************ 00:05:52.975 END TEST event_scheduler 00:05:52.975 ************************************ 00:05:52.975 04:22:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.975 04:22:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.975 04:22:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.975 04:22:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.975 04:22:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.975 ************************************ 00:05:52.975 START TEST app_repeat 00:05:52.975 ************************************ 00:05:52.975 04:22:31 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=117169 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 117169' 00:05:52.975 Process app_repeat pid: 117169 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.975 spdk_app_start Round 0 00:05:52.975 04:22:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 117169 /var/tmp/spdk-nbd.sock 00:05:52.975 04:22:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 117169 ']' 00:05:52.975 04:22:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.976 04:22:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.976 04:22:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.976 04:22:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.976 04:22:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.976 [2024-11-17 04:22:31.723044] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:52.976 [2024-11-17 04:22:31.723121] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117169 ] 00:05:53.236 [2024-11-17 04:22:31.814817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.236 [2024-11-17 04:22:31.839854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.236 [2024-11-17 04:22:31.839854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.236 04:22:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.236 04:22:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.236 04:22:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.496 Malloc0 00:05:53.496 04:22:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.496 Malloc1 00:05:53.496 04:22:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.496 04:22:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.757 /dev/nbd0 00:05:53.757 04:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.757 04:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.757 1+0 records in 00:05:53.757 1+0 records out 00:05:53.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257431 s, 15.9 MB/s 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.757 04:22:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.757 04:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.757 04:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.757 04:22:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.018 /dev/nbd1 00:05:54.018 04:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.018 04:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.018 1+0 records in 00:05:54.018 1+0 records out 00:05:54.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263401 s, 15.6 MB/s 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.018 04:22:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.018 04:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.018 04:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.018 04:22:32 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.018 04:22:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.018 04:22:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.278 { 00:05:54.278 "nbd_device": "/dev/nbd0", 00:05:54.278 "bdev_name": "Malloc0" 00:05:54.278 }, 00:05:54.278 { 00:05:54.278 "nbd_device": "/dev/nbd1", 00:05:54.278 "bdev_name": "Malloc1" 00:05:54.278 } 00:05:54.278 ]' 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.278 { 00:05:54.278 "nbd_device": "/dev/nbd0", 00:05:54.278 "bdev_name": "Malloc0" 00:05:54.278 }, 00:05:54.278 { 00:05:54.278 "nbd_device": "/dev/nbd1", 00:05:54.278 "bdev_name": "Malloc1" 00:05:54.278 } 00:05:54.278 ]' 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.278 /dev/nbd1' 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.278 /dev/nbd1' 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.278 04:22:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.279 04:22:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.279 04:22:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.279 04:22:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.279 256+0 records in 00:05:54.279 256+0 records out 00:05:54.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101924 s, 103 MB/s 00:05:54.279 04:22:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.279 04:22:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.539 256+0 records in 00:05:54.539 256+0 records out 00:05:54.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193963 s, 54.1 MB/s 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.539 256+0 records in 00:05:54.539 256+0 records out 00:05:54.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205264 s, 51.1 MB/s 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.539 04:22:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.800 04:22:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.059 04:22:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.059 04:22:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.320 04:22:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.580 [2024-11-17 04:22:34.208685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.580 [2024-11-17 04:22:34.228268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.580 [2024-11-17 04:22:34.228269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.580 [2024-11-17 04:22:34.268827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.580 [2024-11-17 04:22:34.268868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.877 04:22:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.877 04:22:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:58.877 spdk_app_start Round 1 00:05:58.877 04:22:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 117169 /var/tmp/spdk-nbd.sock 00:05:58.877 04:22:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 117169 ']' 00:05:58.877 04:22:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.877 04:22:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.877 04:22:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
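Editor's note: each nbd_start_disk above is followed by a waitfornbd check: poll /proc/partitions until the device name appears, then read one 4 KiB block back with direct I/O to prove the device actually services requests. A condensed sketch of that helper, reconstructed from the xtrace (test-file path shortened, retry bound as traced; the sleep interval is an assumption):

waitfornbd() {
    local nbd_name=$1 i size
    # Wait for the kernel to publish the new block device
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # interval assumed; not visible in the trace
    done
    # One direct-I/O read through the device; a zero-byte result means it is not up
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}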
00:05:58.877 04:22:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.877 04:22:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.877 04:22:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.877 04:22:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:58.877 04:22:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.877 Malloc0 00:05:58.877 04:22:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.877 Malloc1 00:05:58.877 04:22:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.877 04:22:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.138 /dev/nbd0 00:05:59.138 04:22:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.138 04:22:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:59.138 1+0 records in 00:05:59.138 1+0 records out 00:05:59.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175254 s, 23.4 MB/s 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.138 04:22:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.138 04:22:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.138 04:22:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.138 04:22:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.398 /dev/nbd1 00:05:59.398 04:22:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.398 04:22:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.398 1+0 records in 00:05:59.398 1+0 records out 00:05:59.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240978 s, 17.0 MB/s 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.398 04:22:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.398 04:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.398 04:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.398 04:22:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.398 04:22:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.398 04:22:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.659 { 00:05:59.659 
"nbd_device": "/dev/nbd0", 00:05:59.659 "bdev_name": "Malloc0" 00:05:59.659 }, 00:05:59.659 { 00:05:59.659 "nbd_device": "/dev/nbd1", 00:05:59.659 "bdev_name": "Malloc1" 00:05:59.659 } 00:05:59.659 ]' 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.659 { 00:05:59.659 "nbd_device": "/dev/nbd0", 00:05:59.659 "bdev_name": "Malloc0" 00:05:59.659 }, 00:05:59.659 { 00:05:59.659 "nbd_device": "/dev/nbd1", 00:05:59.659 "bdev_name": "Malloc1" 00:05:59.659 } 00:05:59.659 ]' 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.659 /dev/nbd1' 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.659 /dev/nbd1' 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.659 256+0 records in 00:05:59.659 256+0 records out 00:05:59.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108777 s, 96.4 MB/s 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.659 256+0 records in 00:05:59.659 256+0 records out 00:05:59.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01933 s, 54.2 MB/s 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.659 04:22:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.920 256+0 records in 00:05:59.920 256+0 records out 00:05:59.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204884 s, 51.2 MB/s 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.920 04:22:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.181 04:22:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.441 04:22:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.441 04:22:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.704 04:22:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.964 [2024-11-17 04:22:39.552811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.964 [2024-11-17 04:22:39.572532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.964 [2024-11-17 04:22:39.572532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.964 [2024-11-17 04:22:39.613984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.964 [2024-11-17 04:22:39.614025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.263 04:22:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.263 04:22:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.263 spdk_app_start Round 2 00:06:04.263 04:22:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 117169 /var/tmp/spdk-nbd.sock 00:06:04.263 04:22:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 117169 ']' 00:06:04.263 04:22:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.263 04:22:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.263 04:22:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
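Editor's note: the write and verify passes in each round use nothing but dd and cmp: 1 MiB of /dev/urandom is staged in a temp file, pushed through every nbd device with direct I/O, and later compared byte for byte against each device. A condensed sketch of both phases as the trace shows them ($SPDK_DIR is a stand-in for the long workspace path):

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2 i
    local tmp_file=$SPDK_DIR/test/event/nbdrandtest   # $SPDK_DIR is hypothetical shorthand

    if [ "$operation" = write ]; then
        # 256 x 4 KiB = 1 MiB of random reference data, written through each device
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # Read back through the devices and compare the first 1 MiB byte for byte
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}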
00:06:04.263 04:22:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.263 04:22:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.263 04:22:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.263 04:22:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.263 04:22:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.263 Malloc0 00:06:04.263 04:22:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.263 Malloc1 00:06:04.263 04:22:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.263 04:22:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.523 /dev/nbd0 00:06:04.523 04:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.523 04:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:04.523 1+0 records in 00:06:04.523 1+0 records out 00:06:04.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225529 s, 18.2 MB/s 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.523 04:22:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.523 04:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.523 04:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.523 04:22:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.784 /dev/nbd1 00:06:04.784 04:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.784 04:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.784 1+0 records in 00:06:04.784 1+0 records out 00:06:04.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264163 s, 15.5 MB/s 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.784 04:22:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.784 04:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.784 04:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.784 04:22:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.784 04:22:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.784 04:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.044 { 00:06:05.044 
"nbd_device": "/dev/nbd0", 00:06:05.044 "bdev_name": "Malloc0" 00:06:05.044 }, 00:06:05.044 { 00:06:05.044 "nbd_device": "/dev/nbd1", 00:06:05.044 "bdev_name": "Malloc1" 00:06:05.044 } 00:06:05.044 ]' 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.044 { 00:06:05.044 "nbd_device": "/dev/nbd0", 00:06:05.044 "bdev_name": "Malloc0" 00:06:05.044 }, 00:06:05.044 { 00:06:05.044 "nbd_device": "/dev/nbd1", 00:06:05.044 "bdev_name": "Malloc1" 00:06:05.044 } 00:06:05.044 ]' 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.044 /dev/nbd1' 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.044 /dev/nbd1' 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.044 256+0 records in 00:06:05.044 256+0 records out 00:06:05.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115623 s, 90.7 MB/s 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.044 256+0 records in 00:06:05.044 256+0 records out 00:06:05.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019664 s, 53.3 MB/s 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.044 256+0 records in 00:06:05.044 256+0 records out 00:06:05.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205266 s, 51.1 MB/s 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.044 04:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.305 04:22:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.305 04:22:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.565 04:22:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.825 04:22:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.825 04:22:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.086 04:22:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.346 [2024-11-17 04:22:44.945482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.346 [2024-11-17 04:22:44.965143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.346 [2024-11-17 04:22:44.965144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.346 [2024-11-17 04:22:45.005900] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.346 [2024-11-17 04:22:45.005941] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.645 04:22:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 117169 /var/tmp/spdk-nbd.sock 00:06:09.645 04:22:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 117169 ']' 00:06:09.645 04:22:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.645 04:22:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.645 04:22:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
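Editor's note: every round of app_repeat is bracketed by the same two helpers seen throughout the trace: waitforlisten polls until the app answers on its UNIX-domain RPC socket, and killprocess tears it down after checking the pid still belongs to a reactor process. A minimal sketch of both; the rpc_get_methods probe is an assumption, and the real helpers in autotest_common.sh carry extra guards:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries--)); do
        kill -0 "$pid" || return 1        # app died while we were waiting
        # Any successful RPC proves the listener is up (probe choice assumed)
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                      # nothing to kill
    if [ "$(uname)" = Linux ]; then
        # Refuse to signal anything that is not ours, e.g. a sudo wrapper
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}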
00:06:09.645 04:22:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.645 04:22:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:09.645 04:22:48 event.app_repeat -- event/event.sh@39 -- # killprocess 117169 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 117169 ']' 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 117169 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117169 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117169' 00:06:09.645 killing process with pid 117169 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 117169 00:06:09.645 04:22:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 117169 00:06:09.645 spdk_app_start is called in Round 0. 00:06:09.645 Shutdown signal received, stop current app iteration 00:06:09.645 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:09.646 spdk_app_start is called in Round 1. 00:06:09.646 Shutdown signal received, stop current app iteration 00:06:09.646 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:09.646 spdk_app_start is called in Round 2. 00:06:09.646 Shutdown signal received, stop current app iteration 00:06:09.646 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:09.646 spdk_app_start is called in Round 3. 00:06:09.646 Shutdown signal received, stop current app iteration 00:06:09.646 04:22:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.646 04:22:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.646 00:06:09.646 real 0m16.531s 00:06:09.646 user 0m36.012s 00:06:09.646 sys 0m3.029s 00:06:09.646 04:22:48 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.646 04:22:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.646 ************************************ 00:06:09.646 END TEST app_repeat 00:06:09.646 ************************************ 00:06:09.646 04:22:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.646 04:22:48 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.646 04:22:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.646 04:22:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.646 04:22:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.646 ************************************ 00:06:09.646 START TEST cpu_locks 00:06:09.646 ************************************ 00:06:09.646 04:22:48 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.646 * Looking for test storage... 
00:06:09.646 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:09.646 04:22:48 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.646 04:22:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.646 04:22:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.906 04:22:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.906 04:22:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:09.906 04:22:48 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.906 04:22:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.907 --rc genhtml_branch_coverage=1 00:06:09.907 --rc genhtml_function_coverage=1 00:06:09.907 --rc genhtml_legend=1 00:06:09.907 --rc geninfo_all_blocks=1 00:06:09.907 --rc geninfo_unexecuted_blocks=1 00:06:09.907 00:06:09.907 ' 00:06:09.907 04:22:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.907 --rc genhtml_branch_coverage=1 00:06:09.907 --rc 
genhtml_function_coverage=1 00:06:09.907 --rc genhtml_legend=1 00:06:09.907 --rc geninfo_all_blocks=1 00:06:09.907 --rc geninfo_unexecuted_blocks=1 00:06:09.907 00:06:09.907 ' 00:06:09.907 04:22:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.907 --rc genhtml_branch_coverage=1 00:06:09.907 --rc genhtml_function_coverage=1 00:06:09.907 --rc genhtml_legend=1 00:06:09.907 --rc geninfo_all_blocks=1 00:06:09.907 --rc geninfo_unexecuted_blocks=1 00:06:09.907 00:06:09.907 ' 00:06:09.907 04:22:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.907 --rc genhtml_branch_coverage=1 00:06:09.907 --rc genhtml_function_coverage=1 00:06:09.907 --rc genhtml_legend=1 00:06:09.907 --rc geninfo_all_blocks=1 00:06:09.907 --rc geninfo_unexecuted_blocks=1 00:06:09.907 00:06:09.907 ' 00:06:09.907 04:22:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:09.907 04:22:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:09.907 04:22:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:09.907 04:22:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:09.907 04:22:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.907 04:22:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.907 04:22:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.907 ************************************ 00:06:09.907 START TEST default_locks 00:06:09.907 ************************************ 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=120309 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 120309 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 120309 ']' 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.907 04:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.907 [2024-11-17 04:22:48.579634] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:09.907 [2024-11-17 04:22:48.579681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120309 ] 00:06:09.907 [2024-11-17 04:22:48.670602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.907 [2024-11-17 04:22:48.693465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.167 04:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.167 04:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:10.167 04:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 120309 00:06:10.167 04:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 120309 00:06:10.167 04:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.737 lslocks: write error 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 120309 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 120309 ']' 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 120309 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120309 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120309' 00:06:10.737 killing process with pid 120309 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 120309 00:06:10.737 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 120309 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 120309 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 120309 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 120309 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 120309 ']' 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 
00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (120309) - No such process 00:06:10.997 ERROR: process (pid: 120309) is no longer running 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.997 00:06:10.997 real 0m1.109s 00:06:10.997 user 0m1.062s 00:06:10.997 sys 0m0.536s 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.997 04:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.997 ************************************ 00:06:10.997 END TEST default_locks 00:06:10.997 ************************************ 00:06:10.997 04:22:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:10.997 04:22:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.997 04:22:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.997 04:22:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.997 ************************************ 00:06:10.997 START TEST default_locks_via_rpc 00:06:10.997 ************************************ 00:06:10.997 04:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:10.997 04:22:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=120605 00:06:10.997 04:22:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 120605 00:06:10.997 04:22:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.997 04:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 120605 ']' 00:06:10.997 04:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.997 04:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.997 04:22:49 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.997 04:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.997 04:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.997 [2024-11-17 04:22:49.763693] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:10.997 [2024-11-17 04:22:49.763741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120605 ] 00:06:11.257 [2024-11-17 04:22:49.853363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.257 [2024-11-17 04:22:49.875739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.257 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.257 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.257 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:11.257 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.257 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.258 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.258 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:11.258 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.258 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.258 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.258 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.258 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.258 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.517 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.517 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 120605 00:06:11.517 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 120605 00:06:11.517 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 120605 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 120605 ']' 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 120605 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
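The default_locks_via_rpc sequence above toggles core-lock enforcement at runtime and then verifies the lock with lslocks; the no_locks check passing right after framework_disable_cpumask_locks shows the lock files are released while the target keeps running. A minimal sketch of the exchange, assuming a target started with -m 0x1 on the default RPC socket and the pid taken from the trace (substitute your own):

pid=120605                                        # spdk_tgt pid under test
scripts/rpc.py framework_disable_cpumask_locks    # lock files released
scripts/rpc.py framework_enable_cpumask_locks     # lock re-acquired
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
  echo "core lock held by $pid"
fi

The stray "lslocks: write error" in the trace is most likely benign: grep -q exits on its first match and closes the pipe, so lslocks hits EPIPE on its remaining output.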
00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120605 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120605' 00:06:11.777 killing process with pid 120605 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 120605 00:06:11.777 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 120605 00:06:12.037 00:06:12.037 real 0m1.073s 00:06:12.037 user 0m1.026s 00:06:12.037 sys 0m0.525s 00:06:12.037 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.037 04:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.037 ************************************ 00:06:12.037 END TEST default_locks_via_rpc 00:06:12.037 ************************************ 00:06:12.037 04:22:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:12.037 04:22:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.037 04:22:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.037 04:22:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.037 ************************************ 00:06:12.037 START TEST non_locking_app_on_locked_coremask 00:06:12.037 ************************************ 00:06:12.037 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:12.297 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=120893 00:06:12.297 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 120893 /var/tmp/spdk.sock 00:06:12.297 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.297 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 120893 ']' 00:06:12.297 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.297 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.297 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.297 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.297 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.298 [2024-11-17 04:22:50.917751] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:12.298 [2024-11-17 04:22:50.917795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120893 ] 00:06:12.298 [2024-11-17 04:22:51.004141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.298 [2024-11-17 04:22:51.025547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=120900 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 120900 /var/tmp/spdk2.sock 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 120900 ']' 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.558 04:22:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.558 [2024-11-17 04:22:51.291355] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:12.558 [2024-11-17 04:22:51.291414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120900 ] 00:06:12.819 [2024-11-17 04:22:51.401810] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
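This test pairs a locking instance with a non-locking one: the first target claims core 0, while the second, launched with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice just above), skips lock acquisition entirely, so both can share the mask. The essential pair of invocations, condensed from the trace (the real test gates each start on waitforlisten):

# Instance 1 claims core 0 via /var/tmp/spdk_cpu_lock_000.
build/bin/spdk_tgt -m 0x1 &
# Instance 2 opts out of core locking, so the shared mask is no conflict.
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &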
00:06:12.819 [2024-11-17 04:22:51.401837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.819 [2024-11-17 04:22:51.448184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.389 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.389 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.389 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 120893 00:06:13.389 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 120893 00:06:13.389 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.958 lslocks: write error 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 120893 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 120893 ']' 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 120893 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120893 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120893' 00:06:13.958 killing process with pid 120893 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 120893 00:06:13.958 04:22:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 120893 00:06:14.528 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 120900 00:06:14.528 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 120900 ']' 00:06:14.528 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 120900 00:06:14.528 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.528 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.529 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120900 00:06:14.529 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.529 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.529 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120900' 00:06:14.529 killing 
process with pid 120900 00:06:14.529 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 120900 00:06:14.529 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 120900 00:06:14.789 00:06:14.789 real 0m2.733s 00:06:14.789 user 0m2.860s 00:06:14.789 sys 0m1.013s 00:06:14.789 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.789 04:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.789 ************************************ 00:06:14.789 END TEST non_locking_app_on_locked_coremask 00:06:14.789 ************************************ 00:06:15.049 04:22:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:15.049 04:22:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.049 04:22:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.049 04:22:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.049 ************************************ 00:06:15.049 START TEST locking_app_on_unlocked_coremask 00:06:15.049 ************************************ 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=121409 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 121409 /var/tmp/spdk.sock 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 121409 ']' 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.049 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.049 [2024-11-17 04:22:53.732347] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:15.049 [2024-11-17 04:22:53.732395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121409 ] 00:06:15.049 [2024-11-17 04:22:53.804545] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
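The killprocess helper traced repeatedly in these tests follows one pattern: confirm the pid is alive, inspect its command name (reactor_0 for an SPDK target), log which pid is being killed, then signal and reap it. A simplified sketch, condensed from the autotest_common.sh steps visible above (the real helper also refuses to kill processes named sudo):

killprocess() {
  local pid=$1
  kill -0 "$pid"                            # fail fast if the pid is gone
  local name
  name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for spdk_tgt
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid" || true                       # reap; the pid is our child here
}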
00:06:15.049 [2024-11-17 04:22:53.804569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.049 [2024-11-17 04:22:53.826341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.309 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.309 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.309 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.309 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=121471 00:06:15.309 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 121471 /var/tmp/spdk2.sock 00:06:15.309 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 121471 ']' 00:06:15.309 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.310 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.310 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.310 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.310 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.310 [2024-11-17 04:22:54.072331] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
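locking_app_on_unlocked_coremask inverts the previous test: the first target runs with --disable-cpumask-locks and leaves core 0 unclaimed, so a second, fully locking target can start on the same mask and take the lock itself. Condensed from the trace:

# Instance 1 runs unlocked: no spdk_cpu_lock files are created for it.
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
# Instance 2 keeps the default locking behaviour and claims core 0.
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &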
00:06:15.310 [2024-11-17 04:22:54.072382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121471 ] 00:06:15.569 [2024-11-17 04:22:54.177855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.569 [2024-11-17 04:22:54.219860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.139 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.140 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:16.140 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 121471 00:06:16.140 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 121471 00:06:16.140 04:22:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.521 lslocks: write error 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 121409 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 121409 ']' 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 121409 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121409 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121409' 00:06:17.521 killing process with pid 121409 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 121409 00:06:17.521 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 121409 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 121471 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 121471 ']' 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 121471 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121471 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.091 04:22:56 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121471' 00:06:18.091 killing process with pid 121471 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 121471 00:06:18.091 04:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 121471 00:06:18.351 00:06:18.351 real 0m3.416s 00:06:18.351 user 0m3.610s 00:06:18.351 sys 0m1.307s 00:06:18.351 04:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.351 04:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.351 ************************************ 00:06:18.351 END TEST locking_app_on_unlocked_coremask 00:06:18.351 ************************************ 00:06:18.351 04:22:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.351 04:22:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.351 04:22:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.351 04:22:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.351 ************************************ 00:06:18.351 START TEST locking_app_on_locked_coremask 00:06:18.351 ************************************ 00:06:18.352 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:18.612 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=122042 00:06:18.612 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 122042 /var/tmp/spdk.sock 00:06:18.612 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.612 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 122042 ']' 00:06:18.612 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.612 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.612 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.612 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.612 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.612 [2024-11-17 04:22:57.231605] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:18.612 [2024-11-17 04:22:57.231650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122042 ] 00:06:18.612 [2024-11-17 04:22:57.320659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.612 [2024-11-17 04:22:57.342093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=122048 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 122048 /var/tmp/spdk2.sock 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 122048 /var/tmp/spdk2.sock 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 122048 /var/tmp/spdk2.sock 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 122048 ']' 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.872 04:22:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.872 [2024-11-17 04:22:57.593229] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
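Here both instances request core locking on the same mask, so the second must fail; the ERROR lines just below ("Cannot create lock on core 0, probably process 122042 has claimed it", followed by spdk_app_start exiting) are the outcome the NOT wrapper asserts. A condensed sketch of the negative test (in the real run, waitforlisten gates on the first instance's socket before the second starts):

# Instance 1 claims core 0; instance 2 asks for the same lock and must exit.
build/bin/spdk_tgt -m 0x1 &
if ! build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
  echo "second instance refused the claimed core, as expected"
fi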
00:06:18.873 [2024-11-17 04:22:57.593276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122048 ] 00:06:18.873 [2024-11-17 04:22:57.699615] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 122042 has claimed it. 00:06:18.873 [2024-11-17 04:22:57.699658] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.443 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (122048) - No such process 00:06:19.443 ERROR: process (pid: 122048) is no longer running 00:06:19.443 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.443 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:19.443 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:19.443 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.443 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.443 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.443 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 122042 00:06:19.443 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 122042 00:06:19.443 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.383 lslocks: write error 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 122042 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 122042 ']' 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 122042 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122042 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122042' 00:06:20.383 killing process with pid 122042 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 122042 00:06:20.383 04:22:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 122042 00:06:20.643 00:06:20.643 real 0m2.084s 00:06:20.643 user 0m2.252s 00:06:20.643 sys 0m0.762s 00:06:20.643 04:22:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.643 
04:22:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.643 ************************************ 00:06:20.643 END TEST locking_app_on_locked_coremask 00:06:20.643 ************************************ 00:06:20.643 04:22:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:20.643 04:22:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.643 04:22:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.643 04:22:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.643 ************************************ 00:06:20.643 START TEST locking_overlapped_coremask 00:06:20.643 ************************************ 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=122346 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 122346 /var/tmp/spdk.sock 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 122346 ']' 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.643 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.643 [2024-11-17 04:22:59.404769] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:20.643 [2024-11-17 04:22:59.404821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122346 ] 00:06:20.904 [2024-11-17 04:22:59.494565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.904 [2024-11-17 04:22:59.519691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.904 [2024-11-17 04:22:59.519788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.904 [2024-11-17 04:22:59.519789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=122478 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 122478 /var/tmp/spdk2.sock 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 122478 /var/tmp/spdk2.sock 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 122478 /var/tmp/spdk2.sock 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 122478 ']' 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.904 04:22:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.164 [2024-11-17 04:22:59.770682] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
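The overlap that trips the second target in this test is easy to verify: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the two masks intersect on core 2, which is exactly the core named in the claim error below. The intersection can be checked with plain shell arithmetic:

# 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4); AND keeps the overlap.
printf 'overlapping mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2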
00:06:21.164 [2024-11-17 04:22:59.770737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122478 ] 00:06:21.164 [2024-11-17 04:22:59.880842] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 122346 has claimed it. 00:06:21.164 [2024-11-17 04:22:59.880887] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.735 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (122478) - No such process 00:06:21.735 ERROR: process (pid: 122478) is no longer running 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 122346 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 122346 ']' 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 122346 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122346 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122346' 00:06:21.735 killing process with pid 122346 00:06:21.735 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 122346 00:06:21.735 04:23:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 122346 00:06:21.995 00:06:21.995 real 0m1.433s 00:06:21.995 user 0m3.910s 00:06:21.995 sys 0m0.463s 00:06:21.995 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.995 04:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.995 ************************************ 00:06:21.995 END TEST locking_overlapped_coremask 00:06:21.995 ************************************ 00:06:22.256 04:23:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.256 04:23:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.256 04:23:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.256 04:23:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.256 ************************************ 00:06:22.256 START TEST locking_overlapped_coremask_via_rpc 00:06:22.256 ************************************ 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=122650 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 122650 /var/tmp/spdk.sock 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 122650 ']' 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.256 04:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.256 [2024-11-17 04:23:00.914376] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:22.256 [2024-11-17 04:23:00.914418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122650 ] 00:06:22.256 [2024-11-17 04:23:01.005965] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
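The check_remaining_locks step at the end of the previous test compares the lock files left on disk with the expected set /var/tmp/spdk_cpu_lock_000 through _002: one zero-padded file per claimed core. They can be listed directly; a small sketch, assuming a locking target currently holds cores 0-2:

# Each claimed core owns one lock file; the suffix is the core number.
for f in /var/tmp/spdk_cpu_lock_*; do
  [ -e "$f" ] || continue                  # the glob may match nothing
  echo "core ${f##*_} is claimed ($f)"
done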
00:06:22.256 [2024-11-17 04:23:01.005990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.256 [2024-11-17 04:23:01.030928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.256 [2024-11-17 04:23:01.031041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.256 [2024-11-17 04:23:01.031041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=122757 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 122757 /var/tmp/spdk2.sock 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 122757 ']' 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.516 04:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.516 [2024-11-17 04:23:01.278682] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:22.516 [2024-11-17 04:23:01.278740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122757 ] 00:06:22.777 [2024-11-17 04:23:01.389507] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.777 [2024-11-17 04:23:01.389537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.777 [2024-11-17 04:23:01.442995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.777 [2024-11-17 04:23:01.446978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.777 [2024-11-17 04:23:01.446979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.346 [2024-11-17 04:23:02.122003] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 122650 has claimed it. 
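The conflict above is by construction: the first target was started with -m 0x7 (binary 111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so the two reactors overlap exactly on core 2. A minimal sketch of the per-core lock that the error message refers to, assuming SPDK's convention of one flock()-ed file per claimed core under /var/tmp (the file name matches the lock files seen elsewhere in this log; the script itself is illustrative and not part of the test suite):

    core=2
    lockfile="/var/tmp/spdk_cpu_lock_$(printf '%03d' "$core")"
    exec {fd}>"$lockfile"        # open (and create) the per-core lock file
    if ! flock -n "$fd"; then    # try a non-blocking exclusive lock on that fd
        echo "Cannot create lock on core $core, another process has claimed it." >&2
        exit 1
    fi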
00:06:23.346 request: 00:06:23.346 { 00:06:23.346 "method": "framework_enable_cpumask_locks", 00:06:23.346 "req_id": 1 00:06:23.346 } 00:06:23.346 Got JSON-RPC error response 00:06:23.346 response: 00:06:23.346 { 00:06:23.346 "code": -32603, 00:06:23.346 "message": "Failed to claim CPU core: 2" 00:06:23.346 } 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 122650 /var/tmp/spdk.sock 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 122650 ']' 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.346 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.606 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.606 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.606 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 122757 /var/tmp/spdk2.sock 00:06:23.606 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 122757 ']' 00:06:23.606 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.606 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.606 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
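The request/response pair above can be reproduced by hand against the second target's RPC socket. This invocation is hypothetical, but it reuses only the script path, socket, and method name already shown in the log:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected to fail while pid 122650 holds core 2:
    #   {"code": -32603, "message": "Failed to claim CPU core: 2"}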
00:06:23.606 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.606 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.866 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.866 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.866 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:23.866 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.866 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.866 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.866 00:06:23.866 real 0m1.677s 00:06:23.866 user 0m0.774s 00:06:23.866 sys 0m0.172s 00:06:23.866 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.866 04:23:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.866 ************************************ 00:06:23.866 END TEST locking_overlapped_coremask_via_rpc 00:06:23.866 ************************************ 00:06:23.866 04:23:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:23.866 04:23:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 122650 ]] 00:06:23.866 04:23:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 122650 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 122650 ']' 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 122650 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122650 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122650' 00:06:23.866 killing process with pid 122650 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 122650 00:06:23.866 04:23:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 122650 00:06:24.127 04:23:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 122757 ]] 00:06:24.127 04:23:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 122757 00:06:24.127 04:23:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 122757 ']' 00:06:24.127 04:23:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 122757 00:06:24.127 04:23:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:24.387 04:23:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
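The check_remaining_locks helper traced above boils down to comparing the lock files actually present against a brace-expanded expected set; condensed from cpu_locks.sh lines 36-38 exactly as the xtrace shows:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that exist now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 should hold locks
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # any mismatch fails the test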
00:06:24.387 04:23:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122757 00:06:24.387 04:23:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:24.387 04:23:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:24.387 04:23:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122757' 00:06:24.387 killing process with pid 122757 00:06:24.387 04:23:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 122757 00:06:24.387 04:23:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 122757 00:06:24.649 04:23:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.649 04:23:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.649 04:23:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 122650 ]] 00:06:24.649 04:23:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 122650 00:06:24.649 04:23:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 122650 ']' 00:06:24.649 04:23:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 122650 00:06:24.649 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (122650) - No such process 00:06:24.649 04:23:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 122650 is not found' 00:06:24.649 Process with pid 122650 is not found 00:06:24.649 04:23:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 122757 ]] 00:06:24.649 04:23:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 122757 00:06:24.649 04:23:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 122757 ']' 00:06:24.649 04:23:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 122757 00:06:24.649 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (122757) - No such process 00:06:24.649 04:23:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 122757 is not found' 00:06:24.649 Process with pid 122757 is not found 00:06:24.649 04:23:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.649 00:06:24.649 real 0m15.027s 00:06:24.649 user 0m25.233s 00:06:24.649 sys 0m5.912s 00:06:24.649 04:23:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.649 04:23:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.649 ************************************ 00:06:24.649 END TEST cpu_locks 00:06:24.649 ************************************ 00:06:24.649 00:06:24.649 real 0m39.978s 00:06:24.649 user 1m14.385s 00:06:24.649 sys 0m10.188s 00:06:24.649 04:23:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.649 04:23:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.649 ************************************ 00:06:24.649 END TEST event 00:06:24.649 ************************************ 00:06:24.649 04:23:03 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:24.649 04:23:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.649 04:23:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.649 04:23:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.649 ************************************ 00:06:24.649 START TEST thread 00:06:24.649 ************************************ 00:06:24.649 04:23:03 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:24.910 * Looking for test storage... 00:06:24.910 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.910 04:23:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.910 04:23:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.910 04:23:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.910 04:23:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.910 04:23:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.910 04:23:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.910 04:23:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.910 04:23:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.910 04:23:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.910 04:23:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.910 04:23:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.910 04:23:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:24.910 04:23:03 thread -- scripts/common.sh@345 -- # : 1 00:06:24.910 04:23:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.910 04:23:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.910 04:23:03 thread -- scripts/common.sh@365 -- # decimal 1 00:06:24.910 04:23:03 thread -- scripts/common.sh@353 -- # local d=1 00:06:24.910 04:23:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.910 04:23:03 thread -- scripts/common.sh@355 -- # echo 1 00:06:24.910 04:23:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.910 04:23:03 thread -- scripts/common.sh@366 -- # decimal 2 00:06:24.910 04:23:03 thread -- scripts/common.sh@353 -- # local d=2 00:06:24.910 04:23:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.910 04:23:03 thread -- scripts/common.sh@355 -- # echo 2 00:06:24.910 04:23:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.910 04:23:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.910 04:23:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.910 04:23:03 thread -- scripts/common.sh@368 -- # return 0 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.910 --rc genhtml_branch_coverage=1 00:06:24.910 --rc genhtml_function_coverage=1 00:06:24.910 --rc genhtml_legend=1 00:06:24.910 --rc geninfo_all_blocks=1 00:06:24.910 --rc geninfo_unexecuted_blocks=1 00:06:24.910 00:06:24.910 ' 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.910 --rc genhtml_branch_coverage=1 00:06:24.910 --rc genhtml_function_coverage=1 00:06:24.910 --rc genhtml_legend=1 00:06:24.910 --rc geninfo_all_blocks=1 00:06:24.910 --rc geninfo_unexecuted_blocks=1 00:06:24.910 00:06:24.910 ' 00:06:24.910 04:23:03 thread -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.910 --rc genhtml_branch_coverage=1 00:06:24.910 --rc genhtml_function_coverage=1 00:06:24.910 --rc genhtml_legend=1 00:06:24.910 --rc geninfo_all_blocks=1 00:06:24.910 --rc geninfo_unexecuted_blocks=1 00:06:24.910 00:06:24.910 ' 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.910 --rc genhtml_branch_coverage=1 00:06:24.910 --rc genhtml_function_coverage=1 00:06:24.910 --rc genhtml_legend=1 00:06:24.910 --rc geninfo_all_blocks=1 00:06:24.910 --rc geninfo_unexecuted_blocks=1 00:06:24.910 00:06:24.910 ' 00:06:24.910 04:23:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.910 04:23:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.910 ************************************ 00:06:24.910 START TEST thread_poller_perf 00:06:24.910 ************************************ 00:06:24.910 04:23:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.910 [2024-11-17 04:23:03.715030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:24.910 [2024-11-17 04:23:03.715106] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123289 ] 00:06:25.170 [2024-11-17 04:23:03.808663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.170 [2024-11-17 04:23:03.830601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.170 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:26.112 [2024-11-17T03:23:04.942Z] ====================================== 00:06:26.112 [2024-11-17T03:23:04.942Z] busy:2505320188 (cyc) 00:06:26.112 [2024-11-17T03:23:04.942Z] total_run_count: 426000 00:06:26.112 [2024-11-17T03:23:04.942Z] tsc_hz: 2500000000 (cyc) 00:06:26.112 [2024-11-17T03:23:04.942Z] ====================================== 00:06:26.112 [2024-11-17T03:23:04.942Z] poller_cost: 5881 (cyc), 2352 (nsec) 00:06:26.112 00:06:26.112 real 0m1.176s 00:06:26.112 user 0m1.076s 00:06:26.112 sys 0m0.096s 00:06:26.112 04:23:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.112 04:23:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.112 ************************************ 00:06:26.112 END TEST thread_poller_perf 00:06:26.112 ************************************ 00:06:26.112 04:23:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.112 04:23:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.112 04:23:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.112 04:23:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.372 ************************************ 00:06:26.372 START TEST thread_poller_perf 00:06:26.372 ************************************ 00:06:26.372 04:23:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.372 [2024-11-17 04:23:04.972969] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:26.372 [2024-11-17 04:23:04.973040] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123570 ] 00:06:26.372 [2024-11-17 04:23:05.064119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.372 [2024-11-17 04:23:05.084593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.372 Running 1000 pollers for 1 seconds with 0 microseconds period. 
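Before the zero-period run's figures below, note how poller_cost in the results block above is derived: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. A sketch of the arithmetic (variable names are illustrative; the numbers are taken from the block above):

    busy_cyc=2505320188; runs=426000; tsc_hz=2500000000
    cost_cyc=$(( busy_cyc / runs ))                      # 5881 cycles per poll
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))      # 2352 nsec at 2.5 GHz
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"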
00:06:27.312 [2024-11-17T03:23:06.142Z] ====================================== 00:06:27.312 [2024-11-17T03:23:06.142Z] busy:2501983382 (cyc) 00:06:27.312 [2024-11-17T03:23:06.142Z] total_run_count: 5738000 00:06:27.312 [2024-11-17T03:23:06.142Z] tsc_hz: 2500000000 (cyc) 00:06:27.312 [2024-11-17T03:23:06.142Z] ====================================== 00:06:27.312 [2024-11-17T03:23:06.142Z] poller_cost: 436 (cyc), 174 (nsec) 00:06:27.312 00:06:27.312 real 0m1.164s 00:06:27.312 user 0m1.077s 00:06:27.312 sys 0m0.082s 00:06:27.312 04:23:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.312 04:23:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.312 ************************************ 00:06:27.312 END TEST thread_poller_perf 00:06:27.312 ************************************ 00:06:27.572 04:23:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.572 00:06:27.572 real 0m2.706s 00:06:27.572 user 0m2.313s 00:06:27.572 sys 0m0.416s 00:06:27.572 04:23:06 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.572 04:23:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.572 ************************************ 00:06:27.572 END TEST thread 00:06:27.572 ************************************ 00:06:27.572 04:23:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:27.572 04:23:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:27.572 04:23:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.572 04:23:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.572 04:23:06 -- common/autotest_common.sh@10 -- # set +x 00:06:27.572 ************************************ 00:06:27.572 START TEST app_cmdline 00:06:27.572 ************************************ 00:06:27.572 04:23:06 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:27.572 * Looking for test storage... 
00:06:27.572 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:27.572 04:23:06 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.572 04:23:06 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.572 04:23:06 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.832 04:23:06 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.833 04:23:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.833 --rc genhtml_branch_coverage=1 00:06:27.833 --rc genhtml_function_coverage=1 00:06:27.833 --rc genhtml_legend=1 00:06:27.833 --rc geninfo_all_blocks=1 00:06:27.833 --rc geninfo_unexecuted_blocks=1 00:06:27.833 00:06:27.833 ' 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.833 --rc genhtml_branch_coverage=1 00:06:27.833 --rc genhtml_function_coverage=1 00:06:27.833 --rc genhtml_legend=1 00:06:27.833 --rc geninfo_all_blocks=1 00:06:27.833 --rc geninfo_unexecuted_blocks=1 
00:06:27.833 00:06:27.833 ' 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.833 --rc genhtml_branch_coverage=1 00:06:27.833 --rc genhtml_function_coverage=1 00:06:27.833 --rc genhtml_legend=1 00:06:27.833 --rc geninfo_all_blocks=1 00:06:27.833 --rc geninfo_unexecuted_blocks=1 00:06:27.833 00:06:27.833 ' 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.833 --rc genhtml_branch_coverage=1 00:06:27.833 --rc genhtml_function_coverage=1 00:06:27.833 --rc genhtml_legend=1 00:06:27.833 --rc geninfo_all_blocks=1 00:06:27.833 --rc geninfo_unexecuted_blocks=1 00:06:27.833 00:06:27.833 ' 00:06:27.833 04:23:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.833 04:23:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=123908 00:06:27.833 04:23:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 123908 00:06:27.833 04:23:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 123908 ']' 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.833 04:23:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.833 [2024-11-17 04:23:06.495407] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:27.833 [2024-11-17 04:23:06.495460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123908 ] 00:06:27.833 [2024-11-17 04:23:06.583930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.833 [2024-11-17 04:23:06.605716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.093 04:23:06 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.093 04:23:06 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:28.093 04:23:06 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:28.353 { 00:06:28.353 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:28.353 "fields": { 00:06:28.353 "major": 25, 00:06:28.353 "minor": 1, 00:06:28.353 "patch": 0, 00:06:28.353 "suffix": "-pre", 00:06:28.353 "commit": "83e8405e4" 00:06:28.353 } 00:06:28.353 } 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.353 04:23:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.353 04:23:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.353 04:23:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.353 04:23:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.353 04:23:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:28.353 04:23:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.353 04:23:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:28.353 04:23:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.354 04:23:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:28.354 04:23:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.354 04:23:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:28.354 04:23:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.354 04:23:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:28.354 04:23:07 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:06:28.354 04:23:07 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.613 request: 00:06:28.613 { 00:06:28.613 "method": "env_dpdk_get_mem_stats", 00:06:28.613 "req_id": 1 00:06:28.613 } 00:06:28.613 Got JSON-RPC error response 00:06:28.613 response: 00:06:28.613 { 00:06:28.613 "code": -32601, 00:06:28.613 "message": "Method not found" 00:06:28.613 } 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.613 04:23:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 123908 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 123908 ']' 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 123908 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123908 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123908' 00:06:28.613 killing process with pid 123908 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@973 -- # kill 123908 00:06:28.613 04:23:07 app_cmdline -- common/autotest_common.sh@978 -- # wait 123908 00:06:28.873 00:06:28.873 real 0m1.352s 00:06:28.873 user 0m1.512s 00:06:28.873 sys 0m0.522s 00:06:28.873 04:23:07 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.873 04:23:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.873 ************************************ 00:06:28.873 END TEST app_cmdline 00:06:28.873 ************************************ 00:06:28.873 04:23:07 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:28.873 04:23:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.873 04:23:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.873 04:23:07 -- common/autotest_common.sh@10 -- # set +x 00:06:28.873 ************************************ 00:06:28.873 START TEST version 00:06:28.873 ************************************ 00:06:28.873 04:23:07 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:29.133 * Looking for test storage... 
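The -32601 "Method not found" above is the expected outcome: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other method is rejected. A hypothetical manual check, reusing only names already present in the log:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> {"code": -32601, "message": "Method not found"}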
00:06:29.133 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:29.133 04:23:07 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.133 04:23:07 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.133 04:23:07 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.133 04:23:07 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.133 04:23:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.133 04:23:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.133 04:23:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.133 04:23:07 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.133 04:23:07 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.133 04:23:07 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.133 04:23:07 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.134 04:23:07 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.134 04:23:07 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.134 04:23:07 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.134 04:23:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.134 04:23:07 version -- scripts/common.sh@344 -- # case "$op" in 00:06:29.134 04:23:07 version -- scripts/common.sh@345 -- # : 1 00:06:29.134 04:23:07 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.134 04:23:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.134 04:23:07 version -- scripts/common.sh@365 -- # decimal 1 00:06:29.134 04:23:07 version -- scripts/common.sh@353 -- # local d=1 00:06:29.134 04:23:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.134 04:23:07 version -- scripts/common.sh@355 -- # echo 1 00:06:29.134 04:23:07 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.134 04:23:07 version -- scripts/common.sh@366 -- # decimal 2 00:06:29.134 04:23:07 version -- scripts/common.sh@353 -- # local d=2 00:06:29.134 04:23:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.134 04:23:07 version -- scripts/common.sh@355 -- # echo 2 00:06:29.134 04:23:07 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.134 04:23:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.134 04:23:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.134 04:23:07 version -- scripts/common.sh@368 -- # return 0 00:06:29.134 04:23:07 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.134 04:23:07 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.134 --rc genhtml_branch_coverage=1 00:06:29.134 --rc genhtml_function_coverage=1 00:06:29.134 --rc genhtml_legend=1 00:06:29.134 --rc geninfo_all_blocks=1 00:06:29.134 --rc geninfo_unexecuted_blocks=1 00:06:29.134 00:06:29.134 ' 00:06:29.134 04:23:07 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.134 --rc genhtml_branch_coverage=1 00:06:29.134 --rc genhtml_function_coverage=1 00:06:29.134 --rc genhtml_legend=1 00:06:29.134 --rc geninfo_all_blocks=1 00:06:29.134 --rc geninfo_unexecuted_blocks=1 00:06:29.134 00:06:29.134 ' 00:06:29.134 04:23:07 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.134 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.134 --rc genhtml_branch_coverage=1 00:06:29.134 --rc genhtml_function_coverage=1 00:06:29.134 --rc genhtml_legend=1 00:06:29.134 --rc geninfo_all_blocks=1 00:06:29.134 --rc geninfo_unexecuted_blocks=1 00:06:29.134 00:06:29.134 ' 00:06:29.134 04:23:07 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.134 --rc genhtml_branch_coverage=1 00:06:29.134 --rc genhtml_function_coverage=1 00:06:29.134 --rc genhtml_legend=1 00:06:29.134 --rc geninfo_all_blocks=1 00:06:29.134 --rc geninfo_unexecuted_blocks=1 00:06:29.134 00:06:29.134 ' 00:06:29.134 04:23:07 version -- app/version.sh@17 -- # get_header_version major 00:06:29.134 04:23:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:29.134 04:23:07 version -- app/version.sh@14 -- # cut -f2 00:06:29.134 04:23:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.134 04:23:07 version -- app/version.sh@17 -- # major=25 00:06:29.134 04:23:07 version -- app/version.sh@18 -- # get_header_version minor 00:06:29.134 04:23:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:29.134 04:23:07 version -- app/version.sh@14 -- # cut -f2 00:06:29.134 04:23:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.134 04:23:07 version -- app/version.sh@18 -- # minor=1 00:06:29.134 04:23:07 version -- app/version.sh@19 -- # get_header_version patch 00:06:29.134 04:23:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:29.134 04:23:07 version -- app/version.sh@14 -- # cut -f2 00:06:29.134 04:23:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.134 04:23:07 version -- app/version.sh@19 -- # patch=0 00:06:29.134 04:23:07 version -- app/version.sh@20 -- # get_header_version suffix 00:06:29.134 04:23:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:29.134 04:23:07 version -- app/version.sh@14 -- # cut -f2 00:06:29.134 04:23:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.134 04:23:07 version -- app/version.sh@20 -- # suffix=-pre 00:06:29.134 04:23:07 version -- app/version.sh@22 -- # version=25.1 00:06:29.134 04:23:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:29.134 04:23:07 version -- app/version.sh@28 -- # version=25.1rc0 00:06:29.134 04:23:07 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:29.134 04:23:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:29.134 04:23:07 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:29.134 04:23:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:29.134 00:06:29.134 real 0m0.265s 00:06:29.134 user 0m0.141s 00:06:29.134 sys 0m0.176s 00:06:29.134 04:23:07 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.134 04:23:07 version -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.134 ************************************ 00:06:29.134 END TEST version 00:06:29.134 ************************************ 00:06:29.394 04:23:07 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:29.394 04:23:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:29.394 04:23:07 -- spdk/autotest.sh@194 -- # uname -s 00:06:29.394 04:23:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:29.394 04:23:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:29.394 04:23:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:29.394 04:23:07 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:29.394 04:23:07 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:29.394 04:23:07 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:29.394 04:23:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.394 04:23:07 -- common/autotest_common.sh@10 -- # set +x 00:06:29.394 04:23:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:29.394 04:23:08 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:29.394 04:23:08 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:29.394 04:23:08 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:29.394 04:23:08 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:06:29.394 04:23:08 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:29.394 04:23:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.394 04:23:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.394 04:23:08 -- common/autotest_common.sh@10 -- # set +x 00:06:29.394 ************************************ 00:06:29.394 START TEST nvmf_rdma 00:06:29.394 ************************************ 00:06:29.394 04:23:08 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:29.394 * Looking for test storage... 00:06:29.394 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:29.394 04:23:08 nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.394 04:23:08 nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.394 04:23:08 nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.654 04:23:08 nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.654 04:23:08 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:06:29.654 04:23:08 nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.654 04:23:08 nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.654 --rc genhtml_branch_coverage=1 00:06:29.654 --rc genhtml_function_coverage=1 00:06:29.654 --rc genhtml_legend=1 00:06:29.654 --rc geninfo_all_blocks=1 00:06:29.654 --rc geninfo_unexecuted_blocks=1 00:06:29.654 00:06:29.654 ' 00:06:29.654 04:23:08 nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.654 --rc genhtml_branch_coverage=1 00:06:29.654 --rc genhtml_function_coverage=1 00:06:29.654 --rc genhtml_legend=1 00:06:29.654 --rc geninfo_all_blocks=1 00:06:29.654 --rc geninfo_unexecuted_blocks=1 00:06:29.654 00:06:29.654 ' 00:06:29.654 04:23:08 nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.654 --rc genhtml_branch_coverage=1 00:06:29.654 --rc genhtml_function_coverage=1 00:06:29.654 --rc genhtml_legend=1 00:06:29.654 --rc geninfo_all_blocks=1 00:06:29.654 --rc geninfo_unexecuted_blocks=1 00:06:29.654 00:06:29.654 ' 00:06:29.654 04:23:08 nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.654 --rc genhtml_branch_coverage=1 00:06:29.654 --rc genhtml_function_coverage=1 00:06:29.654 --rc genhtml_legend=1 00:06:29.654 --rc geninfo_all_blocks=1 00:06:29.654 --rc geninfo_unexecuted_blocks=1 00:06:29.654 00:06:29.654 ' 00:06:29.654 04:23:08 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:29.654 04:23:08 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:29.654 04:23:08 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:29.654 04:23:08 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.654 04:23:08 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.654 04:23:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:29.654 ************************************ 00:06:29.654 START TEST nvmf_target_core 00:06:29.654 ************************************ 00:06:29.654 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:29.654 * Looking for test storage... 00:06:29.654 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:29.654 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.654 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.654 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.916 --rc genhtml_branch_coverage=1 00:06:29.916 --rc genhtml_function_coverage=1 00:06:29.916 --rc genhtml_legend=1 00:06:29.916 --rc geninfo_all_blocks=1 00:06:29.916 --rc geninfo_unexecuted_blocks=1 00:06:29.916 00:06:29.916 ' 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.916 --rc genhtml_branch_coverage=1 00:06:29.916 --rc genhtml_function_coverage=1 00:06:29.916 --rc genhtml_legend=1 00:06:29.916 --rc geninfo_all_blocks=1 00:06:29.916 --rc geninfo_unexecuted_blocks=1 00:06:29.916 00:06:29.916 ' 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.916 --rc genhtml_branch_coverage=1 00:06:29.916 --rc genhtml_function_coverage=1 00:06:29.916 --rc genhtml_legend=1 00:06:29.916 --rc geninfo_all_blocks=1 00:06:29.916 --rc geninfo_unexecuted_blocks=1 00:06:29.916 00:06:29.916 ' 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.916 --rc genhtml_branch_coverage=1 00:06:29.916 --rc genhtml_function_coverage=1 00:06:29.916 --rc genhtml_legend=1 00:06:29.916 --rc geninfo_all_blocks=1 00:06:29.916 --rc geninfo_unexecuted_blocks=1 00:06:29.916 00:06:29.916 ' 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.916 04:23:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.917 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:29.917 
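The lcov version probe traced above ('lcov --version' piped through awk, then lt 1.15 2) decides which coverage flags land in LCOV_OPTS: SPDK's cmp_versions helper splits both versions on dots and compares them field by field. A minimal standalone sketch of that comparison, assuming plain bash; the helper name ver_lt is ours, not SPDK's:

  # ver_lt A B: succeed when dotted version A sorts strictly before B,
  # mirroring the scripts/common.sh cmp_versions trace above.
  ver_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0  # first differing field wins
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal versions are not "less than"
  }

  if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi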
************************************ 00:06:29.917 START TEST nvmf_abort 00:06:29.917 ************************************ 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:29.917 * Looking for test storage... 00:06:29.917 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.917 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.178 --rc genhtml_branch_coverage=1 00:06:30.178 --rc genhtml_function_coverage=1 00:06:30.178 --rc genhtml_legend=1 00:06:30.178 --rc geninfo_all_blocks=1 00:06:30.178 --rc geninfo_unexecuted_blocks=1 00:06:30.178 00:06:30.178 ' 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.178 --rc genhtml_branch_coverage=1 00:06:30.178 --rc genhtml_function_coverage=1 00:06:30.178 --rc genhtml_legend=1 00:06:30.178 --rc geninfo_all_blocks=1 00:06:30.178 --rc geninfo_unexecuted_blocks=1 00:06:30.178 00:06:30.178 ' 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.178 --rc genhtml_branch_coverage=1 00:06:30.178 --rc genhtml_function_coverage=1 00:06:30.178 --rc genhtml_legend=1 00:06:30.178 --rc geninfo_all_blocks=1 00:06:30.178 --rc geninfo_unexecuted_blocks=1 00:06:30.178 00:06:30.178 ' 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.178 --rc genhtml_branch_coverage=1 00:06:30.178 --rc genhtml_function_coverage=1 00:06:30.178 --rc genhtml_legend=1 00:06:30.178 --rc geninfo_all_blocks=1 00:06:30.178 --rc geninfo_unexecuted_blocks=1 00:06:30.178 00:06:30.178 ' 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.178 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.179 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.179 04:23:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:38.313 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:38.313 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:38.314 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:38.314 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:38.314 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:38.314 6: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:06:38.314 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:38.314 altname enp217s0f0np0 00:06:38.314 altname ens818f0np0 00:06:38.314 inet 192.168.100.8/24 scope global mlx_0_0 00:06:38.314 valid_lft forever preferred_lft forever 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:38.314 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:38.314 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:38.314 altname enp217s0f1np1 00:06:38.314 altname ens818f1np1 00:06:38.314 inet 192.168.100.9/24 scope global mlx_0_1 00:06:38.314 valid_lft forever preferred_lft forever 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:38.314 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:38.314 04:23:15 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:38.315 192.168.100.9' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:38.315 192.168.100.9' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:38.315 192.168.100.9' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:38.315 04:23:15 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=127781 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 127781 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 127781 ']' 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.315 04:23:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 [2024-11-17 04:23:15.988857] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:38.315 [2024-11-17 04:23:15.988922] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.315 [2024-11-17 04:23:16.080030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.315 [2024-11-17 04:23:16.103510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.315 [2024-11-17 04:23:16.103547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.315 [2024-11-17 04:23:16.103557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.315 [2024-11-17 04:23:16.103565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.315 [2024-11-17 04:23:16.103572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
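Before the target comes up, nvmftestinit (traced above) loads the IB/RDMA module stack (ib_core, ib_uverbs, rdma_cm, rdma_ucm, and friends), walks the Mellanox PCI functions it found (0000:d9:00.0/.1, mlx5), maps each to its netdev under /sys, and harvests the IPv4 address of every RDMA-capable port (192.168.100.8 and 192.168.100.9 here). A hedged sketch of that discovery loop, reusing the same ip/awk/cut pipeline the trace shows; the function name list_rdma_ips is ours:

  # list_rdma_ips: print "ifname ip" for each netdev backing an RDMA port.
  # Sketch only; assumes the mlx5 interfaces are already bound and addressed.
  list_rdma_ips() {
      local pci ifdir ifname addr
      for pci in /sys/bus/pci/devices/*; do
          [[ -d $pci/infiniband ]] || continue   # RDMA-capable functions only
          for ifdir in "$pci"/net/*; do
              [[ -e $ifdir ]] || continue
              ifname=${ifdir##*/}
              # same pipeline as the get_ip_address trace above
              addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
              [[ -n $addr ]] && echo "$ifname $addr"
          done
      done
  }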
00:06:38.315 [2024-11-17 04:23:16.105049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.315 [2024-11-17 04:23:16.105135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.315 [2024-11-17 04:23:16.105137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 [2024-11-17 04:23:16.282018] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e6b3d0/0x1e6f880) succeed. 00:06:38.315 [2024-11-17 04:23:16.300940] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e6c970/0x1eb0f20) succeed. 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 Malloc0 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 Delay0 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 [2024-11-17 04:23:16.472149] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.315 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.316 04:23:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:38.316 [2024-11-17 04:23:16.597699] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:40.227 Initializing NVMe Controllers 00:06:40.227 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:06:40.227 controller IO queue size 128 less than required 00:06:40.227 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:40.227 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:40.227 Initialization complete. Launching workers. 
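Everything the abort run exercises was provisioned over the target's RPC socket in the trace above: an rdma transport, a 64 MB Malloc bdev (4096-byte blocks) wrapped in a delay bdev, a subsystem nqn.2016-06.io.spdk:cnode0 with that bdev as namespace 1, and a listener on 192.168.100.8:4420. The same sequence condensed into plain rpc.py calls, as a sketch (paths assume a stock SPDK checkout; the wait loop stands in for waitforlisten):

  # Start the target with the same core mask the test uses, then provision it.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t rdma -a 192.168.100.8 -s 4420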
00:06:40.227 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42917 00:06:40.227 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42978, failed to submit 62 00:06:40.227 success 42918, unsuccessful 60, failed 0 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:40.227 rmmod nvme_rdma 00:06:40.227 rmmod nvme_fabrics 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 127781 ']' 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 127781 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 127781 ']' 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 127781 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127781 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127781' 00:06:40.227 killing process with pid 127781 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 127781 00:06:40.227 04:23:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 127781 00:06:40.491 04:23:19 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:40.491 00:06:40.491 real 0m10.497s 00:06:40.491 user 0m13.027s 00:06:40.491 sys 0m5.923s 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:40.491 ************************************ 00:06:40.491 END TEST nvmf_abort 00:06:40.491 ************************************ 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.491 ************************************ 00:06:40.491 START TEST nvmf_ns_hotplug_stress 00:06:40.491 ************************************ 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:40.491 * Looking for test storage... 00:06:40.491 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.491 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
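Teardown in the nvmf_abort trace above runs the setup in reverse: drop the trap, sync, retry unloading the initiator modules (the rmmod nvme_rdma / rmmod nvme_fabrics lines are modprobe's verbose output), then kill the nvmf_tgt pid and wait for it. A minimal sketch of that pattern, assuming the target pid was captured in $nvmfpid:

  # Sketch of the nvmftestfini flow traced above; $nvmfpid is assumed set.
  sync
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1  # modules can stay busy briefly after the last disconnect
  done
  if [ -n "$nvmfpid" ]; then
      kill "$nvmfpid"
      wait "$nvmfpid"  # reap the target so its exit status is observed
  fi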
00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.755 --rc genhtml_branch_coverage=1 00:06:40.755 --rc genhtml_function_coverage=1 00:06:40.755 --rc genhtml_legend=1 00:06:40.755 --rc geninfo_all_blocks=1 00:06:40.755 --rc geninfo_unexecuted_blocks=1 00:06:40.755 00:06:40.755 ' 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.755 --rc genhtml_branch_coverage=1 00:06:40.755 --rc genhtml_function_coverage=1 00:06:40.755 --rc genhtml_legend=1 00:06:40.755 --rc geninfo_all_blocks=1 00:06:40.755 --rc geninfo_unexecuted_blocks=1 00:06:40.755 00:06:40.755 ' 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.755 --rc genhtml_branch_coverage=1 00:06:40.755 --rc genhtml_function_coverage=1 00:06:40.755 --rc genhtml_legend=1 00:06:40.755 --rc geninfo_all_blocks=1 00:06:40.755 --rc geninfo_unexecuted_blocks=1 00:06:40.755 00:06:40.755 ' 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:40.755 --rc genhtml_branch_coverage=1 00:06:40.755 --rc genhtml_function_coverage=1 00:06:40.755 --rc genhtml_legend=1 00:06:40.755 --rc geninfo_all_blocks=1 00:06:40.755 --rc geninfo_unexecuted_blocks=1 00:06:40.755 00:06:40.755 ' 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.755 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.756 04:23:19 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.756 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.756 04:23:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:48.894 04:23:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:48.894 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:48.894 04:23:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:48.894 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.894 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:48.894 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
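Above, nvmf/common.sh walks the candidate RDMA-capable PCI functions (here the two ConnectX ports 0000:d9:00.0/.1, device 0x1015) and resolves each one to its kernel net devices through sysfs. A stand-alone sketch of the same walk, substituting lspci for the script's internal pci_bus_cache arrays and scanning only Mellanox (0x15b3) functions rather than the full Intel E810/X722 list:

    declare -a net_devs=()
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        # every netdev bound to this PCI function appears under its sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue    # skip functions with no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the path, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

On this host that yields mlx_0_0 (already echoed above) and mlx_0_1 (echoed next), matching the "Found net devices under" lines in the log.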
00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:48.895 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:48.895 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:48.895 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:48.895 altname enp217s0f0np0 00:06:48.895 altname ens818f0np0 00:06:48.895 inet 192.168.100.8/24 scope global mlx_0_0 00:06:48.895 valid_lft forever preferred_lft forever 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:48.895 04:23:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:48.895 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:48.895 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:48.895 altname enp217s0f1np1 00:06:48.895 altname ens818f1np1 00:06:48.895 inet 192.168.100.9/24 scope global mlx_0_1 00:06:48.895 valid_lft forever preferred_lft forever 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:48.895 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:48.895 192.168.100.9' 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:48.896 192.168.100.9' 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:48.896 192.168.100.9' 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=131769 00:06:48.896 04:23:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 131769 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 131769 ']' 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.896 [2024-11-17 04:23:26.746952] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:48.896 [2024-11-17 04:23:26.747005] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.896 [2024-11-17 04:23:26.838948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.896 [2024-11-17 04:23:26.860513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.896 [2024-11-17 04:23:26.860547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.896 [2024-11-17 04:23:26.860557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.896 [2024-11-17 04:23:26.860565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.896 [2024-11-17 04:23:26.860572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
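The address bookkeeping traced a little earlier (nvmf/common.sh@116-117 and @484-486) reduces to: take one IPv4 address per RDMA interface, then peel off the first and second entries for the target to listen on. A minimal sketch, with the interface names hard-coded to this host's two ports:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9

`ip -o -4` prints one line per address whose fourth field is of the form 192.168.100.8/24; the cut drops the prefix length, exactly as in the awk/cut pipeline the trace shows.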
00:06:48.896 [2024-11-17 04:23:26.862100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.896 [2024-11-17 04:23:26.862130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.896 [2024-11-17 04:23:26.862131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.896 04:23:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.896 04:23:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.896 04:23:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:48.896 04:23:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:48.896 [2024-11-17 04:23:27.195394] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19623d0/0x1966880) succeed. 00:06:48.896 [2024-11-17 04:23:27.204557] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1963970/0x19a7f20) succeed. 00:06:48.896 04:23:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:48.896 04:23:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:48.896 [2024-11-17 04:23:27.681969] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:48.896 04:23:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:49.169 04:23:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:49.429 Malloc0 00:06:49.429 04:23:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:49.689 Delay0 00:06:49.689 04:23:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.689 04:23:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:06:49.948 NULL1 00:06:49.948 04:23:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:50.208 04:23:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:50.208 04:23:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=132072 00:06:50.208 04:23:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:06:50.208 04:23:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.595 Read completed with error (sct=0, sc=11) 00:06:51.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.595 04:23:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.595 04:23:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:51.595 04:23:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:51.855 true 00:06:51.855 04:23:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:06:51.855 04:23:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.795 04:23:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.795 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:52.795 04:23:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:52.795 04:23:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:53.055 true 00:06:53.055 04:23:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:06:53.055 04:23:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.005 04:23:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.005 04:23:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:54.005 04:23:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:54.265 true 00:06:54.265 04:23:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:06:54.265 04:23:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.205 04:23:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.205 04:23:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:55.205 04:23:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1004 00:06:55.465 true 00:06:55.465 04:23:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:06:55.465 04:23:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.406 04:23:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.406 04:23:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:56.406 04:23:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:56.666 true 00:06:56.666 04:23:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:06:56.666 04:23:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.606 04:23:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.606 04:23:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:57.606 04:23:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:57.866 true 00:06:57.866 04:23:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:06:57.866 04:23:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.866 04:23:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.126 04:23:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:58.126 04:23:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:58.387 
true 00:06:58.387 04:23:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:06:58.387 04:23:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.327 04:23:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.587 04:23:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:59.587 04:23:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:59.846 true 00:06:59.846 04:23:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:06:59.846 04:23:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.784 04:23:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.784 04:23:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:00.784 04:23:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:01.043 true 00:07:01.043 04:23:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:01.043 04:23:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
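From here on the log is the steady state of the test: ns_hotplug_stress.sh keeps detaching and re-adding namespace 1 while spdk_nvme_perf (PID 132072, started with -t 30) hammers the controller over RDMA, which is what produces the bursts of suppressed "Read completed with error" messages. The loop traced at ns_hotplug_stress.sh@44-@50 condenses to roughly the sketch below, where rpc is shorthand for the full scripts/rpc.py path:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do    # loop until the perf job exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        ((++null_size))                           # 1001, 1002, ... each pass
        $rpc bdev_null_resize NULL1 "$null_size"
    done

While the namespace is detached, reads against it complete with sct=0, sc=11 (0x0b, which in the NVMe spec is Invalid Namespace or Format), and the initiator rate-limits the log line, hence the "Message suppressed 999 times" entries between each remove_ns/add_ns pair.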
00:07:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.982 04:23:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.982 04:23:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:01.982 04:23:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:02.241 true 00:07:02.241 04:23:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:02.241 04:23:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.194 04:23:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.194 04:23:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:03.194 04:23:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:03.454 true 00:07:03.454 04:23:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:03.454 04:23:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.396 04:23:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.396 04:23:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:04.396 04:23:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:04.656 true 00:07:04.656 04:23:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:04.656 04:23:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.596 04:23:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.596 04:23:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:05.596 04:23:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:05.856 true 00:07:05.856 04:23:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:05.856 04:23:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.116 04:23:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.375 04:23:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:06.375 04:23:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:06.375 true 00:07:06.375 04:23:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:06.375 04:23:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.757 04:23:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:07:07.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.757 04:23:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:07.757 04:23:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:08.017 true 00:07:08.017 04:23:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:08.017 04:23:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.957 04:23:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.957 04:23:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:08.957 04:23:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:09.217 true 00:07:09.217 04:23:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:09.217 04:23:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.156 04:23:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.156 
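The cycle traced above repeats for as long as the I/O generator (PID 132072) stays alive: namespace 1 is hot-removed, the Delay0 bdev is re-attached, and the NULL1 bdev grows by one size unit per pass (null_size=1012, 1013, ...). A minimal bash sketch of that loop, reconstructed from the @44-@50 trace lines; rpc_py, perf_pid, and the null_size seed are assumptions rather than the verbatim script:

    # Sketch of the loop traced at @44-@50; variable names are assumed.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000                                   # assumed starting size
    while kill -0 "$perf_pid"; do                    # false once PID 132072 exits
        $rpc_py nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove namespace 1 under load
        $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0  # re-attach the Delay0 bdev
        (( ++null_size ))                            # 1012, 1013, ... as logged
        $rpc_py bdev_null_resize NULL1 "$null_size"  # grow NULL1 while I/O is in flight
    done

kill -0 sends no signal; it only tests that the PID still exists, which is why the loop ends with a "No such process" error once the generator exits (visible further down in this log).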
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.156 04:23:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:10.156 04:23:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:10.416 true 00:07:10.416 04:23:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:10.416 04:23:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.357 04:23:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.357 04:23:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:11.357 04:23:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:11.617 true 00:07:11.617 04:23:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:11.617 04:23:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.556 04:23:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.556 04:23:51 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:12.556 04:23:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:12.816 true 00:07:12.816 04:23:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:12.816 04:23:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.756 04:23:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.756 04:23:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:13.756 04:23:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:14.017 true 00:07:14.017 04:23:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:14.017 04:23:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.277 04:23:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.277 04:23:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:14.277 04:23:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:14.537 true 00:07:14.537 04:23:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:14.537 04:23:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.738 04:23:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.738 04:23:54 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:15.738 04:23:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:15.998 true 00:07:15.998 04:23:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:15.998 04:23:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.937 04:23:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.937 04:23:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:16.937 04:23:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:17.198 true 00:07:17.198 04:23:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:17.198 04:23:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.139 04:23:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.139 04:23:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:18.139 04:23:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:18.399 true 00:07:18.399 04:23:57 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:18.399 04:23:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.339 04:23:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.339 04:23:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:19.339 04:23:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:19.600 true 00:07:19.600 04:23:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:19.600 04:23:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.552 04:23:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.552 04:23:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:20.552 04:23:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:20.811 true 00:07:20.811 04:23:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:20.811 04:23:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.071 04:23:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.330 04:23:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:21.331 04:23:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:21.331 true 00:07:21.331 04:24:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 
00:07:21.331 04:24:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.589 04:24:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.849 04:24:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:21.849 04:24:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:22.109 true 00:07:22.109 04:24:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:22.109 04:24:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.369 04:24:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.369 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:22.369 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:22.369 Initializing NVMe Controllers 00:07:22.369 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:22.369 Controller IO queue size 128, less than required. 00:07:22.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:22.369 Controller IO queue size 128, less than required. 00:07:22.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:22.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:22.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:22.369 Initialization complete. Launching workers. 
00:07:22.369 ========================================================
00:07:22.369 Latency(us)
00:07:22.369 Device Information : IOPS MiB/s Average min max
00:07:22.369 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5360.73 2.62 21583.97 1011.30 1006887.47
00:07:22.369 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35742.17 17.45 3581.00 1896.06 285513.94
00:07:22.369 ========================================================
00:07:22.369 Total : 41102.90 20.07 5928.99 1011.30 1006887.47
00:07:22.369
00:07:22.629 true 00:07:22.629 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 132072 00:07:22.629 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (132072) - No such process 00:07:22.629 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 132072 00:07:22.629 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.889 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.148 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:23.148 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:23.148 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:23.148 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.148 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:23.148 null0 00:07:23.148 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.148 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.148 04:24:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:23.408 null1 00:07:23.408 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.408 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.408 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:23.667 null2 00:07:23.668 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.668 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.668 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:23.927 null3 00:07:23.927 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.927 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.927 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:23.927 null4 00:07:23.927 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.927 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.927 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:24.187 null5 00:07:24.187 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.187 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.187 04:24:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:24.447 null6 00:07:24.447 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.447 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.447 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:24.709 null7 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
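Once the generator has exited and been reaped, the script clears namespaces 1 and 2 and provisions eight 100 MiB null bdevs (4096-byte block size), null0 through null7, for the multi-threaded phase. A sketch of the teardown and setup traced at @53-@60 above; the counting-loop form is an assumption suggested by the (( i = 0 )) / (( i < nthreads )) trace lines:

    # Sketch of the @53-@60 teardown and null-bdev provisioning; loop form assumed.
    wait "$perf_pid"                          # reap the exited I/O generator (132072)
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 1
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 2
    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; ++i )); do
        # 100 MiB null bdev with a 4096-byte block size, named null0..null7
        $rpc_py bdev_null_create "null$i" 100 4096
    done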
00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
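The @14-@18 lines above trace the body of add_remove: each worker binds one namespace ID to one bdev and attaches and detaches it ten times. A reconstruction; the function framing is an assumption, while the two rpc.py calls are exactly as logged:

    # Reconstruction of add_remove() from the @14-@18 trace; framing assumed.
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; ++i )); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # attach
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"           # detach
        done
    }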
00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 138215 138217 138221 138223 138227 138230 138233 138236 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.709 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.970 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
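The @62-@66 lines above show how the eight workers are fanned out: one backgrounded add_remove per null bdev, with the parent collecting PIDs and blocking on a single wait (the "wait 138215 138217 ..." entry). A sketch of that launcher, assuming the counting-loop form implied by the trace:

    # Sketch of the @62-@66 worker fan-out; one add_remove per null bdev.
    for (( i = 0; i < nthreads; ++i )); do
        add_remove $(( i + 1 )) "null$i" &   # e.g. add_remove 1 null0
        pids+=($!)                           # collect the worker PID
    done
    wait "${pids[@]}"                        # block until all eight workers finish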
00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.971 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.232 04:24:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.232 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.492 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.492 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.492 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.492 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.492 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.492 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.492 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.492 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.492 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.493 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.752 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.010 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.010 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.011 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.270 04:24:04 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.270 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.271 04:24:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.271 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.271 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.271 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
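From this point the eight workers race, so their @16-@18 trace lines interleave freely; each worker still issues a strict add-then-remove sequence against its own namespace ID. One such cycle, runnable by hand against a live target that has nqn.2016-06.io.spdk:cnode1 and the null bdevs configured:

    # A single worker cycle, taken verbatim from the trace above.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3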
00:07:26.530 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.530 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.530 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.530 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.531 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.531 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.531 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.531 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.791 04:24:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.791 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.051 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.052 04:24:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.052 04:24:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.312 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.312 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.312 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.312 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.312 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.312 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.312 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.312 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.572 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.573 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.833 04:24:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.833 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.093 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.093 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.093 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.093 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.093 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.093 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.093 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.093 04:24:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.352 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.612 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:28.872 rmmod nvme_rdma 00:07:28.872 rmmod nvme_fabrics 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 131769 ']' 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 131769 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 131769 ']' 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 131769 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 131769 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 131769' 00:07:28.872 killing process with pid 131769 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 131769 00:07:28.872 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 131769 00:07:29.145 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:29.145 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:29.145 00:07:29.145 real 0m48.631s 00:07:29.145 user 3m20.100s 00:07:29.145 sys 0m14.394s 00:07:29.145 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # 
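
[Editor's note] The run then tears down: `nvmftestfini` syncs, unloads nvme-rdma and nvme-fabrics (the `for i in {1..20}` with `set +e`/`set -e` retries `modprobe -v -r` until the modules release, hence the `rmmod` lines), and `killprocess` stops the nvmf target, PID 131769 in this run. A sketch of that guarded kill, assuming only what the trace itself shows (`kill -0`, a `ps` name check so a sudo wrapper is never signalled, then `kill` and `wait`):

    killprocess() {                         # mirrors the autotest_common.sh calls traced above
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2> /dev/null || return 0        # already exited, nothing to do
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")    # "reactor_1" for the target here
            [ "$name" != sudo ] || return 1            # refuse to kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
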
xtrace_disable 00:07:29.145 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 ************************************ 00:07:29.145 END TEST nvmf_ns_hotplug_stress 00:07:29.145 ************************************ 00:07:29.145 04:24:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:29.145 04:24:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.145 04:24:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.145 04:24:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 ************************************ 00:07:29.145 START TEST nvmf_delete_subsystem 00:07:29.145 ************************************ 00:07:29.145 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:29.145 * Looking for test storage... 00:07:29.415 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:29.415 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.416 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.416 04:24:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.416 --rc genhtml_branch_coverage=1 00:07:29.416 --rc genhtml_function_coverage=1 00:07:29.416 --rc genhtml_legend=1 00:07:29.416 --rc geninfo_all_blocks=1 00:07:29.416 --rc geninfo_unexecuted_blocks=1 00:07:29.416 00:07:29.416 ' 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.416 --rc genhtml_branch_coverage=1 00:07:29.416 --rc genhtml_function_coverage=1 00:07:29.416 --rc genhtml_legend=1 00:07:29.416 --rc geninfo_all_blocks=1 00:07:29.416 --rc geninfo_unexecuted_blocks=1 00:07:29.416 00:07:29.416 ' 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.416 --rc genhtml_branch_coverage=1 00:07:29.416 --rc genhtml_function_coverage=1 00:07:29.416 --rc genhtml_legend=1 00:07:29.416 --rc geninfo_all_blocks=1 00:07:29.416 --rc geninfo_unexecuted_blocks=1 00:07:29.416 00:07:29.416 ' 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.416 --rc genhtml_branch_coverage=1 00:07:29.416 --rc genhtml_function_coverage=1 00:07:29.416 --rc genhtml_legend=1 00:07:29.416 --rc geninfo_all_blocks=1 00:07:29.416 --rc geninfo_unexecuted_blocks=1 00:07:29.416 00:07:29.416 ' 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
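
[Editor's note] The `scripts/common.sh` walk above is the harness deciding whether the installed lcov is new enough: `lt 1.15 2` splits both versions on `.`, `-` or `:` (the `IFS=.-:` reads), compares them numerically field by field, and returns 0 here, which is why the branch/function-coverage LCOV_OPTS get exported next. The traced comparison reduces to something like this (a sketch of the logic, not the script verbatim):

    # Field-wise numeric version compare: "1.15 < 2" is decided by 1 < 2
    # at the first field, so the remaining fields are never consulted.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}    # missing fields compare as 0
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1                           # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"
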
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
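
[Editor's note] Sourcing `nvmf/common.sh` also fixes the host identity that every later `nvme connect` will present: `nvme gen-hostnqn` produces the uuid-based NQN seen above, and the matching `--hostnqn`/`--hostid` flags are kept in an array. In shell form (the array line is verbatim from the trace; deriving HOSTID from the NQN suffix is an assumption consistent with the values shown):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:8013ee90-... in this run
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # the uuid suffix, 8013ee90-59d8-e711-906e-00163566263e
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
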
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.416 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.416 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.417 04:24:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:37.554 04:24:15 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.554 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:37.555 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:37.555 
04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:37.555 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:37.555 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:37.555 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:37.555 04:24:15 
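
[Editor's note] Where the `Found 0000:d9:00.x (0x15b3 - 0x1015)` and `Found net devices under ...` lines come from: `prepare_net_devs` collects candidate NICs by vendor:device ID (0x15b3/0x1015 is a Mellanox ConnectX-4 Lx pair on this node), filters out IDs that need special handling, then reads the netdev names the kernel registered under each PCI function's sysfs node. The sysfs step is simply:

    # For each matched PCI function, list its netdevs from sysfs
    # (addresses taken from this trace; the glob is the @411 line above).
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"   # mlx_0_0 / mlx_0_1 here
    done
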
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:37.555 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:37.555 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:37.555 altname enp217s0f0np0 00:07:37.555 altname ens818f0np0 00:07:37.555 inet 192.168.100.8/24 scope global mlx_0_0 00:07:37.555 valid_lft forever preferred_lft forever 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:37.555 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:37.555 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:37.555 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:37.555 altname enp217s0f1np1 00:07:37.555 
altname ens818f1np1 00:07:37.555 inet 192.168.100.9/24 scope global mlx_0_1 00:07:37.555 valid_lft forever preferred_lft forever 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:37.556 04:24:15 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:37.556 192.168.100.9' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:37.556 192.168.100.9' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:37.556 192.168.100.9' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=142979 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 142979 00:07:37.556 04:24:15 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 142979 ']' 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 [2024-11-17 04:24:15.386898] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:37.556 [2024-11-17 04:24:15.386978] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.556 [2024-11-17 04:24:15.477549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.556 [2024-11-17 04:24:15.498446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.556 [2024-11-17 04:24:15.498483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.556 [2024-11-17 04:24:15.498492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.556 [2024-11-17 04:24:15.498501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.556 [2024-11-17 04:24:15.498524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
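The nvmfappstart/waitforlisten step traced above launches build/bin/nvmf_tgt with core mask 0x3 (two reactors, matching the two "Reactor started" notices below) and blocks until the target answers on /var/tmp/spdk.sock. A rough sketch of what that wait amounts to (reconstructed from the trace, not autotest_common.sh's exact code; $rootdir stands in for the SPDK checkout):

    # Sketch: start nvmf_tgt and poll its RPC socket until it is ready.
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done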
00:07:37.556 [2024-11-17 04:24:15.499730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.556 [2024-11-17 04:24:15.499730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 [2024-11-17 04:24:15.668092] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1adf590/0x1ae3a40) succeed. 00:07:37.556 [2024-11-17 04:24:15.677991] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ae0a90/0x1b250e0) succeed. 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 [2024-11-17 04:24:15.763944] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 NULL1 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.556 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 Delay0 00:07:37.557 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.557 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.557 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.557 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.557 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.557 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=143076 00:07:37.557 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:37.557 04:24:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:37.557 [2024-11-17 04:24:15.908204] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
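Collected in plain form, the RPC sequence traced above builds the target side of the test (rpc_cmd is scripts/rpc.py against /var/tmp/spdk.sock; arguments exactly as traced):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read+write latency, microseconds
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is the point of the setup: with roughly one second of average and p99 latency on every read and write, the 128-deep random workload that spdk_nvme_perf starts above is guaranteed to have commands in flight when nvmf_delete_subsystem fires two seconds later.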
00:07:39.468 04:24:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.468 04:24:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.468 04:24:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.408 NVMe io qpair process completion error 00:07:40.408 NVMe io qpair process completion error 00:07:40.408 NVMe io qpair process completion error 00:07:40.408 NVMe io qpair process completion error 00:07:40.408 NVMe io qpair process completion error 00:07:40.408 NVMe io qpair process completion error 00:07:40.408 NVMe io qpair process completion error 00:07:40.408 04:24:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.408 04:24:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:40.408 04:24:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 143076 00:07:40.408 04:24:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:40.976 04:24:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:40.976 04:24:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 143076 00:07:40.976 04:24:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Write completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Write completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Write completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Write completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error (sct=0, sc=8) 00:07:41.236 starting I/O failed: -6 00:07:41.236 Read completed with error 
(sct=0, sc=8) 00:07:41.236 starting I/O failed: -6
[... several hundred further identical 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines elided ...]
Write completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Write completed with error (sct=0, sc=8) 00:07:41.237 Write completed with error (sct=0, sc=8) 00:07:41.237 Write completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Write completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Write completed with error (sct=0, sc=8) 00:07:41.237 Write completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Write completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Read completed with error (sct=0, sc=8) 00:07:41.237 Initializing NVMe Controllers 00:07:41.237 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:41.237 Controller IO queue size 128, less than required. 00:07:41.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:41.237 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:41.237 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:41.237 Initialization complete. Launching workers. 00:07:41.237 ======================================================== 00:07:41.237 Latency(us) 00:07:41.237 Device Information : IOPS MiB/s Average min max 00:07:41.237 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.60 0.04 1591962.87 1000122.71 2969114.27 00:07:41.237 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.60 0.04 1592955.36 1001146.10 2969838.99 00:07:41.237 ======================================================== 00:07:41.237 Total : 161.19 0.08 1592459.11 1000122.71 2969838.99 00:07:41.237 00:07:41.237 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:41.238 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 143076 00:07:41.238 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:41.238 [2024-11-17 04:24:20.015674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:07:41.238 [2024-11-17 04:24:20.015715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
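The storm of aborted completions above is the expected outcome: nvmf_delete_subsystem tears down cnode1 while spdk_nvme_perf still has 128-deep queues outstanding against it, so every in-flight command fails back to the initiator (sct=0, sc=8 corresponds to the NVMe generic status "Command Aborted due to SQ Deletion") and the controller ends up in the failed state reported by nvme_ctrlr_fail. The test then simply waits for the perf process to notice and exit; the loop shape, reconstructed from the delete_subsystem.sh trace ($perf_pid is the traced perf PID):

    # Sketch of the traced wait loop: poll until perf exits, ~15 s budget.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1   # 30 iterations x 0.5 s
        sleep 0.5
    done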
00:07:41.238 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 143076 00:07:41.808 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (143076) - No such process 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 143076 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 143076 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 143076 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.808 [2024-11-17 04:24:20.541771] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=143936 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:41.808 04:24:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.068 [2024-11-17 04:24:20.652162] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:42.329 04:24:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.329 04:24:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:42.329 04:24:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.899 04:24:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.899 04:24:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:42.899 04:24:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.469 04:24:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.469 04:24:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:43.469 04:24:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.039 04:24:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.039 04:24:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:44.039 04:24:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.299 04:24:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.299 04:24:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:44.299 04:24:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.869 04:24:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.869 04:24:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:44.869 04:24:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 
-- # sleep 0.5 00:07:45.439 04:24:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.439 04:24:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:45.439 04:24:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.008 04:24:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.008 04:24:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:46.008 04:24:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.578 04:24:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.578 04:24:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:46.578 04:24:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.838 04:24:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.838 04:24:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:46.838 04:24:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.408 04:24:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.408 04:24:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:47.408 04:24:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.978 04:24:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.978 04:24:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:47.978 04:24:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.548 04:24:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.548 04:24:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:48.548 04:24:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.807 04:24:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.807 04:24:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:48.807 04:24:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:49.066 Initializing NVMe Controllers 00:07:49.066 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:49.066 Controller IO queue size 128, less than required. 00:07:49.066 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:49.066 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:49.066 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:49.066 Initialization complete. Launching workers. 00:07:49.066 ======================================================== 00:07:49.066 Latency(us) 00:07:49.066 Device Information : IOPS MiB/s Average min max 00:07:49.066 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001259.86 1000059.47 1004126.99 00:07:49.066 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002393.01 1000107.24 1005654.35 00:07:49.066 ======================================================== 00:07:49.066 Total : 256.00 0.12 1001826.44 1000059.47 1005654.35 00:07:49.066 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 143936 00:07:49.326 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (143936) - No such process 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 143936 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:49.326 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:49.326 rmmod nvme_rdma 00:07:49.586 rmmod nvme_fabrics 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 142979 ']' 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 142979 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 142979 ']' 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 142979 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.587 
04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 142979 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 142979' 00:07:49.587 killing process with pid 142979 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 142979 00:07:49.587 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 142979 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:49.847 00:07:49.847 real 0m20.594s 00:07:49.847 user 0m49.252s 00:07:49.847 sys 0m6.762s 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.847 ************************************ 00:07:49.847 END TEST nvmf_delete_subsystem 00:07:49.847 ************************************ 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.847 ************************************ 00:07:49.847 START TEST nvmf_host_management 00:07:49.847 ************************************ 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:49.847 * Looking for test storage... 
00:07:49.847 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.847 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.108 --rc genhtml_branch_coverage=1 00:07:50.108 --rc genhtml_function_coverage=1 00:07:50.108 --rc genhtml_legend=1 00:07:50.108 --rc geninfo_all_blocks=1 00:07:50.108 --rc geninfo_unexecuted_blocks=1 00:07:50.108 00:07:50.108 ' 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.108 --rc genhtml_branch_coverage=1 00:07:50.108 --rc genhtml_function_coverage=1 00:07:50.108 --rc genhtml_legend=1 00:07:50.108 --rc geninfo_all_blocks=1 00:07:50.108 --rc geninfo_unexecuted_blocks=1 00:07:50.108 00:07:50.108 ' 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.108 --rc genhtml_branch_coverage=1 00:07:50.108 --rc genhtml_function_coverage=1 00:07:50.108 --rc genhtml_legend=1 00:07:50.108 --rc geninfo_all_blocks=1 00:07:50.108 --rc geninfo_unexecuted_blocks=1 00:07:50.108 00:07:50.108 ' 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.108 --rc genhtml_branch_coverage=1 00:07:50.108 --rc genhtml_function_coverage=1 00:07:50.108 --rc genhtml_legend=1 00:07:50.108 --rc geninfo_all_blocks=1 00:07:50.108 --rc geninfo_unexecuted_blocks=1 00:07:50.108 00:07:50.108 ' 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.108 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:50.109 04:24:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:58.246 04:24:35 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:58.246 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:58.246 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.246 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:58.247 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:d9:00.1: mlx_0_1' 00:07:58.247 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
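The "Found net devices under ..." lines above come from a plain sysfs walk: every netdev bound to a PCI function shows up as a directory entry under /sys/bus/pci/devices/<bdf>/net. A sketch using the first BDF from this trace:

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"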
00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:58.247 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.247 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:58.247 altname enp217s0f0np0 00:07:58.247 altname ens818f0np0 00:07:58.247 inet 192.168.100.8/24 scope global mlx_0_0 00:07:58.247 valid_lft forever preferred_lft forever 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:58.247 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.247 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:07:58.247 altname enp217s0f1np1 00:07:58.247 altname ens818f1np1 00:07:58.247 inet 192.168.100.9/24 scope global mlx_0_1 00:07:58.247 valid_lft forever preferred_lft forever 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:58.247 04:24:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.247 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:58.247 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:58.247 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:58.248 04:24:36 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:58.248 192.168.100.9' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:58.248 192.168.100.9' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:58.248 192.168.100.9' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=148663 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 148663 00:07:58.248 
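The address harvesting above condenses to one pipeline per interface: column four of ip -o -4 addr show is the CIDR address, and cut drops the prefix length. Rebuilt standalone with the interfaces and addresses printed by this rig:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(get_ip_address mlx_0_0; get_ip_address mlx_0_1)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9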
04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 148663 ']' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.248 [2024-11-17 04:24:36.145396] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:58.248 [2024-11-17 04:24:36.145453] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.248 [2024-11-17 04:24:36.239197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.248 [2024-11-17 04:24:36.262581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.248 [2024-11-17 04:24:36.262617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.248 [2024-11-17 04:24:36.262627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.248 [2024-11-17 04:24:36.262636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.248 [2024-11-17 04:24:36.262644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
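nvmfappstart, whose startup notices surround this point, reduces to: launch nvmf_tgt in the background, record its pid, and block until the RPC socket answers. A sketch under the assumption of a 100 x 0.1 s retry budget (rpc_get_methods is a standard SPDK RPC; the binary path and flags are the ones in the trace):

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc_sock=/var/tmp/spdk.sock
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
for (( i = 0; i < 100; i++ )); do
    "$spdk/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &> /dev/null && break
    sleep 0.1
done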
00:07:58.248 [2024-11-17 04:24:36.264374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.248 [2024-11-17 04:24:36.264412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.248 [2024-11-17 04:24:36.264497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.248 [2024-11-17 04:24:36.264498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.248 [2024-11-17 04:24:36.425398] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x137cf50/0x1381400) succeed. 00:07:58.248 [2024-11-17 04:24:36.435179] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x137e590/0x13c2aa0) succeed. 
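With the rdma transport created above, the next lines (host_management.sh@22-@30) build the test subsystem from an rpcs.txt batch whose contents the xtrace does not print. A plausible equivalent sequence, labeled as a reconstruction rather than the script's literal batch, using only standard SPDK RPCs and values visible in the trace (64 MiB Malloc bdev, 512-byte blocks, cnode0/host0, rdma listener on 192.168.100.8:4420):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # matches host_management.sh@18
$rpc bdev_malloc_create 64 512 -b Malloc0                              # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0         # serial number is illustrative
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420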
00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.248 Malloc0 00:07:58.248 [2024-11-17 04:24:36.623511] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=148917 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 148917 /var/tmp/bdevperf.sock 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 148917 ']' 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:58.248 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:58.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
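The --json /dev/fd/63 in the bdevperf command line above is bash process substitution: the JSON emitted by gen_nvmf_target_json is handed to bdevperf through an anonymous file descriptor without ever touching disk. Reproduced standalone with the same flags:

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$spdk/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10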
00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:58.249 { 00:07:58.249 "params": { 00:07:58.249 "name": "Nvme$subsystem", 00:07:58.249 "trtype": "$TEST_TRANSPORT", 00:07:58.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:58.249 "adrfam": "ipv4", 00:07:58.249 "trsvcid": "$NVMF_PORT", 00:07:58.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:58.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:58.249 "hdgst": ${hdgst:-false}, 00:07:58.249 "ddgst": ${ddgst:-false} 00:07:58.249 }, 00:07:58.249 "method": "bdev_nvme_attach_controller" 00:07:58.249 } 00:07:58.249 EOF 00:07:58.249 )") 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:58.249 04:24:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:58.249 "params": { 00:07:58.249 "name": "Nvme0", 00:07:58.249 "trtype": "rdma", 00:07:58.249 "traddr": "192.168.100.8", 00:07:58.249 "adrfam": "ipv4", 00:07:58.249 "trsvcid": "4420", 00:07:58.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:58.249 "hdgst": false, 00:07:58.249 "ddgst": false 00:07:58.249 }, 00:07:58.249 "method": "bdev_nvme_attach_controller" 00:07:58.249 }' 00:07:58.249 [2024-11-17 04:24:36.729177] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:58.249 [2024-11-17 04:24:36.729230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148917 ] 00:07:58.249 [2024-11-17 04:24:36.821661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.249 [2024-11-17 04:24:36.844032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.249 Running I/O for 10 seconds... 
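The heredoc template above shows what gen_nvmf_target_json emits per subsystem index. A compact re-implementation with jq rather than the script's heredoc/IFS join; the outer subsystems/bdev/config wrapper is an assumption about the document shape bdevperf expects, and the address/port are the ones resolved earlier in this run:

gen_nvmf_target_json() {
    local subsystem entries=()
    for subsystem in "${@:-1}"; do
        entries+=("$(jq -n --arg n "$subsystem" '{
            params: {
                name: ("Nvme" + $n),
                trtype: "rdma",
                traddr: "192.168.100.8",
                adrfam: "ipv4",
                trsvcid: "4420",
                subnqn: ("nqn.2016-06.io.spdk:cnode" + $n),
                hostnqn: ("nqn.2016-06.io.spdk:host" + $n),
                hdgst: false,
                ddgst: false
            },
            method: "bdev_nvme_attach_controller"
        }')")
    done
    printf '%s\n' "${entries[@]}" | jq -s '{subsystems: [{subsystem: "bdev", config: .}]}'
}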
00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1771 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1771 -ge 100 ']' 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.820 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
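waitforio, exercised at host_management.sh@54-@64 above, polls bdevperf's per-bdev statistics until reads are flowing; the 1771 ops read here clears the 100-op floor on the first probe. A sketch where the 0.25 s interval is assumed and the RPC, jq filter, and thresholds come from the trace:

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
waitforio() {
    local rpc_sock=$1 bdev=$2 i ops
    for (( i = 10; i != 0; i-- )); do
        ops=$("$spdk/scripts/rpc.py" -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        [[ $ops =~ ^[0-9]+$ ]] && (( ops >= 100 )) && return 0
        sleep 0.25
    done
    return 1
}
waitforio /var/tmp/bdevperf.sock Nvme0n1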
00:07:59.080 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.080 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:59.080 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.080 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.080 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.080 04:24:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:59.912 1920.00 IOPS, 120.00 MiB/s [2024-11-17T03:24:38.742Z] [2024-11-17 04:24:38.658602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a5f380 len:0x10000 key:0x182100 00:07:59.912 [2024-11-17 04:24:38.658635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.912 [2024-11-17 04:24:38.658653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a4f300 len:0x10000 key:0x182100 00:07:59.912 [2024-11-17 04:24:38.658663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.912 [2024-11-17 04:24:38.658675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a3f280 len:0x10000 key:0x182100 00:07:59.912 [2024-11-17 04:24:38.658684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.912 [2024-11-17 04:24:38.658695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a2f200 len:0x10000 key:0x182100 00:07:59.912 [2024-11-17 04:24:38.658704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.912 [2024-11-17 04:24:38.658718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a1f180 len:0x10000 key:0x182100 00:07:59.912 [2024-11-17 04:24:38.658728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.912 [2024-11-17 04:24:38.658738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a0f100 len:0x10000 key:0x182100 00:07:59.912 [2024-11-17 04:24:38.658747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.912 [2024-11-17 04:24:38.658758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000df0000 len:0x10000 key:0x182000 00:07:59.912 [2024-11-17 04:24:38.658767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0
00:07:59.912 [2024-11-17 04:24:38.658778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ddff80 len:0x10000 key:0x182000
00:07:59.912 [2024-11-17 04:24:38.658787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0
00:07:59.912 [... the same print_command/print_completion pair repeats for every I/O still queued on qid:1: WRITE lba:118784 through lba:122752 and READ lba:114688 through lba:117632, 128 blocks each (cid, buffer address, and memory key vary per entry), and every completion is the identical ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 ...]
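For reference while reading the dump above: the parenthesized pair in each completion is the NVMe status code type and status code, in hex. (00/08) is generic status 0x08, Command Aborted due to SQ Deletion, which is expected once the target tears down the submission queue during the controller reset that follows. A small illustrative decoder (hypothetical helper, not part of the test suite):

    # Illustrative only: decode the "(SCT/SC)" pair that spdk_nvme_print_completion emits.
    decode_nvme_status() {
      # $1 looks like "00/08": status code type / status code, both hex.
      local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
      if (( sct == 0 && sc == 0x08 )); then
        echo "generic status 0x08: Command Aborted due to SQ Deletion"
      else
        printf 'sct=0x%x sc=0x%02x\n' "$sct" "$sc"
      fi
    }
    decode_nvme_status 00/08   # -> generic status 0x08: Command Aborted due to SQ Deletion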
cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.913 [2024-11-17 04:24:38.659846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c022000 len:0x10000 key:0x182b00 00:07:59.913 [2024-11-17 04:24:38.659855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.913 [2024-11-17 04:24:38.659866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c001000 len:0x10000 key:0x182b00 00:07:59.913 [2024-11-17 04:24:38.659874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.913 [2024-11-17 04:24:38.659885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfe0000 len:0x10000 key:0x182b00 00:07:59.913 [2024-11-17 04:24:38.659894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:9791 cdw0:7b959000 sqhd:1a1a p:0 m:0 dnr:0 00:07:59.913 [2024-11-17 04:24:38.663011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:59.913 task offset: 117760 on job bdev=Nvme0n1 fails 00:07:59.913 00:07:59.913 Latency(us) 00:07:59.913 [2024-11-17T03:24:38.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.913 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:59.913 Job: Nvme0n1 ended in about 1.64 seconds with error 00:07:59.913 Verification LBA range: start 0x0 length 0x400 00:07:59.913 Nvme0n1 : 1.64 1169.83 73.11 38.99 0.00 52441.92 2202.01 1020054.73 00:07:59.913 [2024-11-17T03:24:38.743Z] =================================================================================================================== 00:07:59.913 [2024-11-17T03:24:38.743Z] Total : 1169.83 73.11 38.99 0.00 52441.92 2202.01 1020054.73 00:07:59.913 [2024-11-17 04:24:38.665452] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 148917 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.913 { 00:07:59.913 "params": { 00:07:59.913 "name": "Nvme$subsystem", 00:07:59.913 "trtype": "$TEST_TRANSPORT", 
00:07:59.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.913 "adrfam": "ipv4", 00:07:59.913 "trsvcid": "$NVMF_PORT", 00:07:59.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.913 "hdgst": ${hdgst:-false}, 00:07:59.913 "ddgst": ${ddgst:-false} 00:07:59.913 }, 00:07:59.913 "method": "bdev_nvme_attach_controller" 00:07:59.913 } 00:07:59.913 EOF 00:07:59.913 )") 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:59.913 04:24:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.913 "params": { 00:07:59.913 "name": "Nvme0", 00:07:59.913 "trtype": "rdma", 00:07:59.913 "traddr": "192.168.100.8", 00:07:59.913 "adrfam": "ipv4", 00:07:59.913 "trsvcid": "4420", 00:07:59.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:59.913 "hdgst": false, 00:07:59.913 "ddgst": false 00:07:59.913 }, 00:07:59.913 "method": "bdev_nvme_attach_controller" 00:07:59.913 }' 00:07:59.913 [2024-11-17 04:24:38.721219] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:59.913 [2024-11-17 04:24:38.721269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149199 ] 00:08:00.173 [2024-11-17 04:24:38.811794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.173 [2024-11-17 04:24:38.834176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.433 Running I/O for 1 seconds... 
00:08:01.373 3117.00 IOPS, 194.81 MiB/s
00:08:01.373
00:08:01.373 Latency(us)
00:08:01.373 [2024-11-17T03:24:40.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:01.373 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:01.373 Verification LBA range: start 0x0 length 0x400
00:08:01.373 Nvme0n1 : 1.01 3137.67 196.10 0.00 0.00 19990.87 321.13 40265.32
00:08:01.373 [2024-11-17T03:24:40.203Z] ===================================================================================================================
00:08:01.373 [2024-11-17T03:24:40.203Z] Total : 3137.67 196.10 0.00 0.00 19990.87 321.13 40265.32
00:08:01.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 148917 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:01.633 rmmod nvme_rdma
00:08:01.633 rmmod nvme_fabrics
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 148663 ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 148663
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 148663 ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 148663
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
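A quick cross-check of the two result tables above: at 64 KiB per I/O, the MiB/s column is simply IOPS / 16 (65536 / 2^20). The awk below is purely illustrative arithmetic:

    # Sanity-check the reported throughput: 64 KiB per I/O means MiB/s = IOPS / 16.
    awk 'BEGIN {
      printf "failed run:  %.2f IOPS -> %.2f MiB/s (table says 73.11)\n", 1169.83, 1169.83 / 16
      printf "passing run: %.2f IOPS -> %.2f MiB/s (table says 196.10)\n", 3137.67, 3137.67 / 16
    }'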
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 148663
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 148663'
00:08:01.633 killing process with pid 148663
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 148663
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 148663
00:08:01.893 [2024-11-17 04:24:40.569584] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:08:01.893
00:08:01.893 real 0m12.042s
00:08:01.893 user 0m22.674s
00:08:01.893 sys 0m6.626s
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:01.893 ************************************
00:08:01.893 END TEST nvmf_host_management
00:08:01.893 ************************************
04:24:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
04:24:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
04:24:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
04:24:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:01.893 ************************************
00:08:01.893 START TEST nvmf_lvol
00:08:01.893 ************************************
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
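The kill/wait sequence above follows autotest_common.sh's killprocess helper: confirm the pid is alive, check its command name, then kill and reap it. A condensed sketch of that pattern (simplified; the real helper has more branches, e.g. for processes started via sudo):

    # Sketch of the teardown traced above: verify the pid's command name before
    # killing, so a recycled pid never takes out an unrelated process.
    killprocess() {
      local pid=$1 process_name
      kill -0 "$pid" 2> /dev/null || return 0          # already gone
      if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [[ $process_name == sudo ]] && return 1          # real helper resolves the child instead
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                              # reap; nonzero exit is expected here
    }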
00:08:02.153 * Looking for test storage...
00:08:02.153 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]]
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[xtrace condensed: cmp_versions splits 1.15 and 2 on IFS=.-: into ver1=(1 15) and ver2=(2), validates each component with decimal, and walks them position by position; 1 < 2, so lt succeeds and this lcov is treated as older than 2]
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[xtrace condensed: LCOV_OPTS and LCOV are then exported with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1; the four near-identical export blocks are elided]
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
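lt and cmp_versions above implement a pure-bash version comparison: both strings are split on dots and dashes and compared numerically, position by position. A compact sketch of the same idea (not the harness's exact code; numeric components only):

    # Compact version compare in the spirit of scripts/common.sh: exit 0 when $1 < $2.
    version_lt() {
      local IFS=.- v1 v2 i
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1    # missing components count as 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1    # equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2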
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[xtrace condensed: paths/export.sh@2-6 repeatedly prepend the golangci 1.54.2, protoc 21.7, and go 1.21.1 bin directories to PATH and re-export it; the multi-kilobyte repeated PATH values are elided]
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
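About the '[: : integer expression expected' complaint above: test's -eq demands integers on both sides, and '[' '' -eq 1 ']' passes an empty expansion. The benign fix is to default the expansion before comparing, as this hypothetical snippet shows:

    # Why nvmf/common.sh line 33 complains: an empty string is not an integer for -eq.
    var=""
    [ "$var" -eq 1 ] 2> /dev/null || echo "fails: '' is not an integer"
    # Defaulting the expansion keeps the comparison well-formed:
    [ "${var:-0}" -eq 1 ] || echo "clean: unset/empty is treated as 0"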
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]]
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable
04:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:10.300 04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[xtrace condensed: nvmf/common.sh@315-322 declare the empty pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays]
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
[xtrace condensed: nvmf/common.sh@330-344 append the Mellanox device IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015 and 0x1013 to mlx; @346-354 fold e810, x722 and mlx into pci_devs and, since the NIC type is mlx5, reduce pci_devs to the mlx entries]
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:08:10.300 Found 0000:d9:00.0 (0x15b3 - 0x1015)
[xtrace condensed: per function, nvmf/common.sh@368-388 verify the driver is mlx5_core (neither unknown nor unbound) and the device ID is neither 0x1017 nor 0x1019, and for rdma set NVME_CONNECT='nvme connect -i 15'; the same pass prints and checks the second port]
00:08:10.301 Found 0000:d9:00.1 (0x15b3 - 0x1015)
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:08:10.301 Found net devices under 0000:d9:00.0: mlx_0_0
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
[xtrace condensed: the second @410-429 pass maps 0000:d9:00.1 the same way]
00:08:10.301 Found net devices under 0000:d9:00.1: mlx_0_1
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core
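The scan above matched both ports of one adapter: PCI vendor 0x15b3 with device 0x1015 is a Mellanox ConnectX-4 Lx, and each function is tied to its netdev through sysfs. A minimal standalone sketch of that walk (illustrative, not the harness code):

    # Sketch: find Mellanox (0x15b3) PCI functions and their net devices via sysfs,
    # mirroring the pci_devs/pci_net_devs loops traced above.
    for dev in /sys/bus/pci/devices/*; do
      [[ $(cat "$dev/vendor") == 0x15b3 ]] || continue
      printf 'Found %s (%s - %s)\n' "${dev##*/}" "$(cat "$dev/vendor")" "$(cat "$dev/device")"
      for net in "$dev"/net/*; do
        [[ -e $net ]] && echo "  net device: ${net##*/}"
      done
    done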
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm
04:24:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips
[xtrace condensed: nvmf/common.sh@76-109 walk get_rdma_if_list over the two net devices, match mlx_0_0 and mlx_0_1 against the rxe list produced by rxe_cfg, and select each interface in turn]
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}'
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8
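rdma_device_init loads the kernel IB/RDMA stack one module at a time, as traced above; modprobe -v echoes the underlying actions (hence the rmmod lines elsewhere in the log). The same sequence as a compact loop, assuming the stock module names from this trace:

    # The IB/RDMA kernel stack loaded above, as a single idempotent loop.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe -v "$mod" || echo "warning: $mod did not load" >&2
    done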
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:08:10.301 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:10.301 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:08:10.301 altname enp217s0f0np0
00:08:10.301 altname ens818f0np0
00:08:10.301 inet 192.168.100.8/24 scope global mlx_0_0
00:08:10.301 valid_lft forever preferred_lft forever
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}'
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:08:10.302 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:10.302 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:08:10.302 altname enp217s0f1np1
00:08:10.302 altname ens818f1np1
00:08:10.302 inet 192.168.100.9/24 scope global mlx_0_1
00:08:10.302 valid_lft forever preferred_lft forever
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 ))
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol
-- nvmf/common.sh@109 -- # continue 2 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:10.302 192.168.100.9' 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:10.302 192.168.100.9' 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:10.302 192.168.100.9' 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:10.302 
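Everything from get_rdma_if_list down to the RDMA_IP_LIST assignment above is the harness enumerating the RDMA-backed netdevs and harvesting one IPv4 address per port. A minimal sketch of that pipeline; the get_ipv4 helper name is illustrative, not the script's own:

get_ipv4() {
    # field 4 of 'ip -o -4 addr show' is CIDR notation like 192.168.100.8/24;
    # cut strips the prefix length
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ipv4 mlx_0_0)
$(get_ipv4 mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9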
04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=152918 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 152918 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 152918 ']' 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.302 [2024-11-17 04:24:48.260673] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:08:10.302 [2024-11-17 04:24:48.260721] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.302 [2024-11-17 04:24:48.351684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.302 [2024-11-17 04:24:48.373233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.302 [2024-11-17 04:24:48.373270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.302 [2024-11-17 04:24:48.373280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.302 [2024-11-17 04:24:48.373287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.302 [2024-11-17 04:24:48.373310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
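The -m 0x7 handed to nvmf_tgt is a CPU core bitmask: 0x7 = 0b111 selects cores 0 through 2, which is why the EAL reports three available cores and three reactor threads start in the records that follow. Sketch of the relationship (the 0x1 variant is what the nvmf_lvs_grow run further down uses):

nvmf_tgt -i 0 -e 0xFFFF -m 0x7   # 0b111 -> one reactor pinned on each of cores 0, 1, 2
nvmf_tgt -i 0 -e 0xFFFF -m 0x1   # 0b001 -> a single reactor on core 0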
00:08:10.302 [2024-11-17 04:24:48.374719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.302 [2024-11-17 04:24:48.374831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.302 [2024-11-17 04:24:48.374833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:10.302 [2024-11-17 04:24:48.707514] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5f60d0/0x5fa580) succeed. 00:08:10.302 [2024-11-17 04:24:48.716501] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5f7670/0x63bc20) succeed. 00:08:10.302 04:24:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:10.302 04:24:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:10.302 04:24:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:10.570 04:24:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:10.570 04:24:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:10.834 04:24:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:11.095 04:24:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bbc3626d-16dd-4228-a17a-9e6e644bfc81 00:08:11.095 04:24:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bbc3626d-16dd-4228-a17a-9e6e644bfc81 lvol 20 00:08:11.095 04:24:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1e437ff4-c878-4bfd-8751-bf8c4e2a4120 00:08:11.095 04:24:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.355 04:24:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e437ff4-c878-4bfd-8751-bf8c4e2a4120 00:08:11.615 04:24:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:11.875 [2024-11-17 04:24:50.456989] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:11.875 04:24:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:11.875 04:24:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=153475 00:08:11.875 04:24:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:11.875 04:24:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:13.257 04:24:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1e437ff4-c878-4bfd-8751-bf8c4e2a4120 MY_SNAPSHOT 00:08:13.257 04:24:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e6adb06f-abe8-4bf2-8f3f-6c554889561c 00:08:13.257 04:24:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1e437ff4-c878-4bfd-8751-bf8c4e2a4120 30 00:08:13.517 04:24:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e6adb06f-abe8-4bf2-8f3f-6c554889561c MY_CLONE 00:08:13.517 04:24:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=aa1fd19e-5399-4395-8ec3-1bf3ecd35459 00:08:13.517 04:24:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate aa1fd19e-5399-4395-8ec3-1bf3ecd35459 00:08:13.777 04:24:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 153475 00:08:23.771 Initializing NVMe Controllers 00:08:23.771 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:23.771 Controller IO queue size 128, less than required. 00:08:23.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:23.771 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:23.771 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:23.771 Initialization complete. Launching workers. 
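Condensed from the rpc.py calls above, this is the stack the test builds and then mutates while spdk_nvme_perf writes to it over RDMA; a readability sketch, not a replayable script, since each call's output feeds the next and the UUIDs from the log are abbreviated:

rpc.py bdev_malloc_create 64 512                                   # -> Malloc0 (64 MB, 512 B blocks)
rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe both into raid0
rpc.py bdev_lvol_create_lvstore raid0 lvs                          # -> lvstore bbc3626d-...
rpc.py bdev_lvol_create -u bbc3626d-... lvol 20                    # -> 20 MiB lvol 1e437ff4-...
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e437ff4-...
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
# while the 10 s random-write load runs, the lvol is mutated underneath it:
rpc.py bdev_lvol_snapshot 1e437ff4-... MY_SNAPSHOT                 # freeze current data -> e6adb06f-...
rpc.py bdev_lvol_resize 1e437ff4-... 30                            # grow the live lvol to 30 MiB
rpc.py bdev_lvol_clone e6adb06f-... MY_CLONE                       # thin clone of the snapshot -> aa1fd19e-...
rpc.py bdev_lvol_inflate aa1fd19e-...                              # make the clone independent of its snapshot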
00:08:23.771 ========================================================
00:08:23.771 Latency(us)
00:08:23.771 Device Information : IOPS MiB/s Average min max
00:08:23.771 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16886.00 65.96 7581.33 1986.97 38340.76
00:08:23.771 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16742.20 65.40 7646.18 3437.11 34768.19
00:08:23.771 ========================================================
00:08:23.771 Total : 33628.20 131.36 7613.62 1986.97 38340.76
00:08:23.771
00:08:23.771 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:23.771 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1e437ff4-c878-4bfd-8751-bf8c4e2a4120
00:08:23.771 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bbc3626d-16dd-4228-a17a-9e6e644bfc81
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:24.031 rmmod nvme_rdma
00:08:24.031 rmmod nvme_fabrics
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 152918 ']'
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 152918
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 152918 ']'
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 152918
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152918
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol --
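The latency table above is self-consistent: at the 4096-byte IO size, 33628.20 IOPS x 4 KiB = 131.4 MiB/s, matching the MiB/s total, and with queue depth 128 on each of the two cores Little's law gives 256 outstanding IOs / 7613.62 us average latency = 33.6 kIOPS, matching the measured rate.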
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152918' 00:08:24.031 killing process with pid 152918 00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 152918 00:08:24.031 04:25:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 152918 00:08:24.291 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.291 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:24.291 00:08:24.291 real 0m22.393s 00:08:24.291 user 1m11.014s 00:08:24.291 sys 0m6.704s 00:08:24.291 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.291 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:24.291 ************************************ 00:08:24.291 END TEST nvmf_lvol 00:08:24.291 ************************************ 00:08:24.291 04:25:03 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:24.291 04:25:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.291 04:25:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.291 04:25:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.550 ************************************ 00:08:24.550 START TEST nvmf_lvs_grow 00:08:24.550 ************************************ 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:24.550 * Looking for test storage... 
00:08:24.550 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.550 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:24.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.551 --rc genhtml_branch_coverage=1 00:08:24.551 --rc genhtml_function_coverage=1 00:08:24.551 --rc genhtml_legend=1 00:08:24.551 --rc geninfo_all_blocks=1 00:08:24.551 --rc geninfo_unexecuted_blocks=1 00:08:24.551 00:08:24.551 ' 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:24.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.551 --rc genhtml_branch_coverage=1 00:08:24.551 --rc genhtml_function_coverage=1 00:08:24.551 --rc genhtml_legend=1 00:08:24.551 --rc geninfo_all_blocks=1 00:08:24.551 --rc geninfo_unexecuted_blocks=1 00:08:24.551 00:08:24.551 ' 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:24.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.551 --rc genhtml_branch_coverage=1 00:08:24.551 --rc genhtml_function_coverage=1 00:08:24.551 --rc genhtml_legend=1 00:08:24.551 --rc geninfo_all_blocks=1 00:08:24.551 --rc geninfo_unexecuted_blocks=1 00:08:24.551 00:08:24.551 ' 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:24.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.551 --rc genhtml_branch_coverage=1 00:08:24.551 --rc genhtml_function_coverage=1 00:08:24.551 --rc genhtml_legend=1 00:08:24.551 --rc geninfo_all_blocks=1 00:08:24.551 --rc geninfo_unexecuted_blocks=1 00:08:24.551 00:08:24.551 ' 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
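The cmp_versions dance above (scripts/common.sh) splits each version string on '.', '-' or ':' and compares the fields numerically from the left; here the installed lcov 1.15 is found to predate 2.x, so the legacy lcov_branch_coverage/lcov_function_coverage options get exported next. A self-contained sketch of the idea, assuming purely numeric fields (function name illustrative):

version_lt() {
    local -a a b
    local i n
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller field -> less-than
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly larger field -> not less-than
    done
    return 1                                        # all fields equal -> not less-than
}
version_lt 1.15 2 && echo 'lcov predates 2.x'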
00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.551 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.811 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.812 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.812 04:25:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.947 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.948 04:25:10 
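The '[: : integer expression expected' complaint from nvmf/common.sh line 33 a few records back is harmless: the flag that line tests is empty in this environment, so test is handed '' where -eq expects an integer. Defaulting the expansion would silence it; a sketch with hypothetical names, not the script's own:

[ "${SPDK_SOME_FLAG:-0}" -eq 1 ] && NVMF_APP+=(--some-extra-arg)   # flag and argument names hypothetical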
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:32.948 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:32.948 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:32.948 04:25:10 
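Device discovery works off a PCI allowlist: the e810/x722/mlx arrays populated above hold vendor:device IDs, and each probed function is matched against them. Both ports here, 0000:d9:00.0 and 0000:d9:00.1, report Mellanox 0x15b3:0x1015 and take the mlx5 branch, which also rewrites NVME_CONNECT to 'nvme connect -i 15' (nvme-cli's --nr-io-queues). The records that follow then map each matched function to its netdev through sysfs, roughly:

# every netdev bound to a PCI function is listed under that function's sysfs node
for pci in 0000:d9:00.0 0000:d9:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done
done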
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:32.948 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:32.948 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:32.948 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:32.949 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.949 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:32.949 altname enp217s0f0np0 00:08:32.949 altname ens818f0np0 00:08:32.949 inet 192.168.100.8/24 scope global mlx_0_0 00:08:32.949 valid_lft forever preferred_lft forever 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:32.949 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.949 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:32.949 altname enp217s0f1np1 00:08:32.949 altname ens818f1np1 00:08:32.949 inet 192.168.100.9/24 scope global mlx_0_1 00:08:32.949 valid_lft forever preferred_lft forever 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:32.949 04:25:10 
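allocate_nic_ips walks the RDMA netdevs and assigns an address only where one is missing; both ports here already carry 192.168.100.8 and .9, so each [[ -z ... ]] test above short-circuits. The assignment logic, sketched with the get_ipv4 helper from the earlier sketch and the NVMF_IP_PREFIX=192.168.100 / NVMF_IP_LEAST_ADDR=8 values set when common.sh was sourced:

count=8                                                  # NVMF_IP_LEAST_ADDR
for nic in $(get_rdma_if_list); do
    addr=$(get_ipv4 "$nic")
    if [ -z "$addr" ]; then
        ip addr add "192.168.100.$count/24" dev "$nic"   # hand out the next address
        ip link set "$nic" up                            # assumption: the interface is also brought up
    fi
    count=$((count + 1))
done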
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:32.949 192.168.100.9' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:32.949 192.168.100.9' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:32.949 192.168.100.9' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=158960 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 158960 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 158960 ']' 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.949 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.949 [2024-11-17 04:25:10.803872] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:08:32.949 [2024-11-17 04:25:10.803918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.949 [2024-11-17 04:25:10.893734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.949 [2024-11-17 04:25:10.914580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.949 [2024-11-17 04:25:10.914614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.949 [2024-11-17 04:25:10.914624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.949 [2024-11-17 04:25:10.914632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.949 [2024-11-17 04:25:10.914656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.950 [2024-11-17 04:25:10.915278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.950 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.950 04:25:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:32.950 [2024-11-17 04:25:11.231341] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x157ac50/0x157f100) succeed. 00:08:32.950 [2024-11-17 04:25:11.239859] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x157c0b0/0x15c07a0) succeed. 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.950 ************************************ 00:08:32.950 START TEST lvs_grow_clean 00:08:32.950 ************************************ 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=90b9c240-323d-421a-99ea-c466ade6eede 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:32.950 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:33.210 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:33.210 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:33.210 04:25:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 90b9c240-323d-421a-99ea-c466ade6eede lvol 150 00:08:33.471 04:25:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cdd5abbc-891b-4044-aa6c-71d3b4161559 00:08:33.471 04:25:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.471 04:25:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:33.732 [2024-11-17 04:25:12.317974] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:33.732 [2024-11-17 04:25:12.318023] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:33.732 true 00:08:33.732 04:25:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:33.732 04:25:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:33.732 04:25:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:33.732 04:25:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.992 04:25:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cdd5abbc-891b-4044-aa6c-71d3b4161559 00:08:34.252 04:25:12 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:34.252 [2024-11-17 04:25:13.048351] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:34.252 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=159372 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 159372 /var/tmp/bdevperf.sock 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 159372 ']' 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.513 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:34.513 [2024-11-17 04:25:13.262797] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
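Condensed, the lvs_grow setup the trace just walked through is the following rpc.py sequence (aio_file abbreviates the full test/nvmf/target/aio_bdev path; the lvstore UUID 90b9c240-... and lvol UUID cdd5abbc-... are captured from the command output at run time):

    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096          # 4 KiB block size
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs      # 49 usable 4 MiB data clusters
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150         # 150 MiB volume -> 38 clusters
    truncate -s 400M aio_file                              # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                        # block count 51200 -> 102400
    # total_data_clusters stays 49 until bdev_lvol_grow_lvstore is issued during the I/O run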
00:08:34.513 [2024-11-17 04:25:13.262844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159372 ] 00:08:34.773 [2024-11-17 04:25:13.352065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.773 [2024-11-17 04:25:13.374440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.773 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.773 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:34.773 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:35.033 Nvme0n1 00:08:35.033 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:35.293 [ 00:08:35.293 { 00:08:35.293 "name": "Nvme0n1", 00:08:35.293 "aliases": [ 00:08:35.293 "cdd5abbc-891b-4044-aa6c-71d3b4161559" 00:08:35.293 ], 00:08:35.293 "product_name": "NVMe disk", 00:08:35.293 "block_size": 4096, 00:08:35.293 "num_blocks": 38912, 00:08:35.293 "uuid": "cdd5abbc-891b-4044-aa6c-71d3b4161559", 00:08:35.293 "numa_id": 1, 00:08:35.293 "assigned_rate_limits": { 00:08:35.293 "rw_ios_per_sec": 0, 00:08:35.293 "rw_mbytes_per_sec": 0, 00:08:35.293 "r_mbytes_per_sec": 0, 00:08:35.293 "w_mbytes_per_sec": 0 00:08:35.293 }, 00:08:35.293 "claimed": false, 00:08:35.293 "zoned": false, 00:08:35.293 "supported_io_types": { 00:08:35.293 "read": true, 00:08:35.293 "write": true, 00:08:35.293 "unmap": true, 00:08:35.293 "flush": true, 00:08:35.293 "reset": true, 00:08:35.293 "nvme_admin": true, 00:08:35.293 "nvme_io": true, 00:08:35.293 "nvme_io_md": false, 00:08:35.293 "write_zeroes": true, 00:08:35.293 "zcopy": false, 00:08:35.293 "get_zone_info": false, 00:08:35.293 "zone_management": false, 00:08:35.293 "zone_append": false, 00:08:35.293 "compare": true, 00:08:35.293 "compare_and_write": true, 00:08:35.293 "abort": true, 00:08:35.293 "seek_hole": false, 00:08:35.293 "seek_data": false, 00:08:35.293 "copy": true, 00:08:35.293 "nvme_iov_md": false 00:08:35.293 }, 00:08:35.293 "memory_domains": [ 00:08:35.293 { 00:08:35.293 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:35.293 "dma_device_type": 0 00:08:35.293 } 00:08:35.293 ], 00:08:35.293 "driver_specific": { 00:08:35.293 "nvme": [ 00:08:35.293 { 00:08:35.293 "trid": { 00:08:35.293 "trtype": "RDMA", 00:08:35.293 "adrfam": "IPv4", 00:08:35.293 "traddr": "192.168.100.8", 00:08:35.293 "trsvcid": "4420", 00:08:35.293 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:35.293 }, 00:08:35.293 "ctrlr_data": { 00:08:35.293 "cntlid": 1, 00:08:35.293 "vendor_id": "0x8086", 00:08:35.293 "model_number": "SPDK bdev Controller", 00:08:35.293 "serial_number": "SPDK0", 00:08:35.293 "firmware_revision": "25.01", 00:08:35.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.293 "oacs": { 00:08:35.293 "security": 0, 00:08:35.293 "format": 0, 00:08:35.293 "firmware": 0, 00:08:35.293 "ns_manage": 0 00:08:35.293 }, 00:08:35.293 "multi_ctrlr": true, 
00:08:35.293 "ana_reporting": false 00:08:35.293 }, 00:08:35.293 "vs": { 00:08:35.293 "nvme_version": "1.3" 00:08:35.293 }, 00:08:35.293 "ns_data": { 00:08:35.293 "id": 1, 00:08:35.293 "can_share": true 00:08:35.293 } 00:08:35.293 } 00:08:35.293 ], 00:08:35.293 "mp_policy": "active_passive" 00:08:35.293 } 00:08:35.293 } 00:08:35.293 ] 00:08:35.293 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=159589 00:08:35.293 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.293 04:25:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:35.293 Running I/O for 10 seconds... 00:08:36.233 Latency(us) 00:08:36.233 [2024-11-17T03:25:15.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.233 Nvme0n1 : 1.00 35072.00 137.00 0.00 0.00 0.00 0.00 0.00 00:08:36.233 [2024-11-17T03:25:15.063Z] =================================================================================================================== 00:08:36.233 [2024-11-17T03:25:15.063Z] Total : 35072.00 137.00 0.00 0.00 0.00 0.00 0.00 00:08:36.233 00:08:37.173 04:25:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:37.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.433 Nvme0n1 : 2.00 35312.50 137.94 0.00 0.00 0.00 0.00 0.00 00:08:37.433 [2024-11-17T03:25:16.263Z] =================================================================================================================== 00:08:37.433 [2024-11-17T03:25:16.263Z] Total : 35312.50 137.94 0.00 0.00 0.00 0.00 0.00 00:08:37.433 00:08:37.433 true 00:08:37.433 04:25:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:37.433 04:25:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:37.692 04:25:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:37.692 04:25:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:37.692 04:25:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 159589 00:08:38.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.262 Nvme0n1 : 3.00 35371.67 138.17 0.00 0.00 0.00 0.00 0.00 00:08:38.262 [2024-11-17T03:25:17.092Z] =================================================================================================================== 00:08:38.262 [2024-11-17T03:25:17.092Z] Total : 35371.67 138.17 0.00 0.00 0.00 0.00 0.00 00:08:38.262 00:08:39.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.644 Nvme0n1 : 4.00 35496.25 138.66 0.00 0.00 0.00 0.00 0.00 00:08:39.644 [2024-11-17T03:25:18.474Z] 
=================================================================================================================== 00:08:39.644 [2024-11-17T03:25:18.474Z] Total : 35496.25 138.66 0.00 0.00 0.00 0.00 0.00 00:08:39.644 00:08:40.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.583 Nvme0n1 : 5.00 35564.80 138.93 0.00 0.00 0.00 0.00 0.00 00:08:40.583 [2024-11-17T03:25:19.413Z] =================================================================================================================== 00:08:40.583 [2024-11-17T03:25:19.413Z] Total : 35564.80 138.93 0.00 0.00 0.00 0.00 0.00 00:08:40.583 00:08:41.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.523 Nvme0n1 : 6.00 35615.17 139.12 0.00 0.00 0.00 0.00 0.00 00:08:41.523 [2024-11-17T03:25:20.353Z] =================================================================================================================== 00:08:41.523 [2024-11-17T03:25:20.353Z] Total : 35615.17 139.12 0.00 0.00 0.00 0.00 0.00 00:08:41.523 00:08:42.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.464 Nvme0n1 : 7.00 35569.43 138.94 0.00 0.00 0.00 0.00 0.00 00:08:42.464 [2024-11-17T03:25:21.294Z] =================================================================================================================== 00:08:42.464 [2024-11-17T03:25:21.294Z] Total : 35569.43 138.94 0.00 0.00 0.00 0.00 0.00 00:08:42.464 00:08:43.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.403 Nvme0n1 : 8.00 35605.12 139.08 0.00 0.00 0.00 0.00 0.00 00:08:43.403 [2024-11-17T03:25:22.233Z] =================================================================================================================== 00:08:43.403 [2024-11-17T03:25:22.233Z] Total : 35605.12 139.08 0.00 0.00 0.00 0.00 0.00 00:08:43.403 00:08:44.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.343 Nvme0n1 : 9.00 35638.22 139.21 0.00 0.00 0.00 0.00 0.00 00:08:44.343 [2024-11-17T03:25:23.173Z] =================================================================================================================== 00:08:44.343 [2024-11-17T03:25:23.173Z] Total : 35638.22 139.21 0.00 0.00 0.00 0.00 0.00 00:08:44.343 00:08:45.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.282 Nvme0n1 : 10.00 35666.50 139.32 0.00 0.00 0.00 0.00 0.00 00:08:45.282 [2024-11-17T03:25:24.112Z] =================================================================================================================== 00:08:45.282 [2024-11-17T03:25:24.112Z] Total : 35666.50 139.32 0.00 0.00 0.00 0.00 0.00 00:08:45.282 00:08:45.282 00:08:45.282 Latency(us) 00:08:45.282 [2024-11-17T03:25:24.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.283 Nvme0n1 : 10.00 35665.53 139.32 0.00 0.00 3586.04 2254.44 9175.04 00:08:45.283 [2024-11-17T03:25:24.113Z] =================================================================================================================== 00:08:45.283 [2024-11-17T03:25:24.113Z] Total : 35665.53 139.32 0.00 0.00 3586.04 2254.44 9175.04 00:08:45.283 { 00:08:45.283 "results": [ 00:08:45.283 { 00:08:45.283 "job": "Nvme0n1", 00:08:45.283 "core_mask": "0x2", 00:08:45.283 "workload": "randwrite", 00:08:45.283 "status": "finished", 00:08:45.283 "queue_depth": 128, 00:08:45.283 "io_size": 4096, 
00:08:45.283 "runtime": 10.003385, 00:08:45.283 "iops": 35665.527219036354, 00:08:45.283 "mibps": 139.31846569936076, 00:08:45.283 "io_failed": 0, 00:08:45.283 "io_timeout": 0, 00:08:45.283 "avg_latency_us": 3586.042204596721, 00:08:45.283 "min_latency_us": 2254.4384, 00:08:45.283 "max_latency_us": 9175.04 00:08:45.283 } 00:08:45.283 ], 00:08:45.283 "core_count": 1 00:08:45.283 } 00:08:45.283 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 159372 00:08:45.283 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 159372 ']' 00:08:45.283 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 159372 00:08:45.283 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:45.283 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.283 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 159372 00:08:45.542 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:45.542 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:45.542 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 159372' 00:08:45.542 killing process with pid 159372 00:08:45.542 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 159372 00:08:45.542 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.542 00:08:45.542 Latency(us) 00:08:45.542 [2024-11-17T03:25:24.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.542 [2024-11-17T03:25:24.372Z] =================================================================================================================== 00:08:45.542 [2024-11-17T03:25:24.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.542 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 159372 00:08:45.542 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:45.802 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.062 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:46.062 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:46.322 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:46.322 04:25:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:46.322 04:25:24 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.322 [2024-11-17 04:25:25.085917] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:46.322 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:46.582 request: 00:08:46.582 { 00:08:46.582 "uuid": "90b9c240-323d-421a-99ea-c466ade6eede", 00:08:46.582 "method": "bdev_lvol_get_lvstores", 00:08:46.582 "req_id": 1 00:08:46.582 } 00:08:46.582 Got JSON-RPC error response 00:08:46.582 response: 00:08:46.582 { 00:08:46.582 "code": -19, 00:08:46.582 "message": "No such device" 00:08:46.582 } 00:08:46.582 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:46.582 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:46.582 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:46.582 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:46.582 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.842 aio_bdev 00:08:46.842 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cdd5abbc-891b-4044-aa6c-71d3b4161559 00:08:46.842 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=cdd5abbc-891b-4044-aa6c-71d3b4161559 00:08:46.842 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.842 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:46.842 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.842 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.842 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.102 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cdd5abbc-891b-4044-aa6c-71d3b4161559 -t 2000 00:08:47.102 [ 00:08:47.102 { 00:08:47.102 "name": "cdd5abbc-891b-4044-aa6c-71d3b4161559", 00:08:47.102 "aliases": [ 00:08:47.102 "lvs/lvol" 00:08:47.102 ], 00:08:47.102 "product_name": "Logical Volume", 00:08:47.102 "block_size": 4096, 00:08:47.102 "num_blocks": 38912, 00:08:47.102 "uuid": "cdd5abbc-891b-4044-aa6c-71d3b4161559", 00:08:47.102 "assigned_rate_limits": { 00:08:47.102 "rw_ios_per_sec": 0, 00:08:47.102 "rw_mbytes_per_sec": 0, 00:08:47.102 "r_mbytes_per_sec": 0, 00:08:47.102 "w_mbytes_per_sec": 0 00:08:47.102 }, 00:08:47.102 "claimed": false, 00:08:47.102 "zoned": false, 00:08:47.102 "supported_io_types": { 00:08:47.102 "read": true, 00:08:47.102 "write": true, 00:08:47.102 "unmap": true, 00:08:47.102 "flush": false, 00:08:47.102 "reset": true, 00:08:47.102 "nvme_admin": false, 00:08:47.102 "nvme_io": false, 00:08:47.102 "nvme_io_md": false, 00:08:47.102 "write_zeroes": true, 00:08:47.102 "zcopy": false, 00:08:47.102 "get_zone_info": false, 00:08:47.102 "zone_management": false, 00:08:47.102 "zone_append": false, 00:08:47.102 "compare": false, 00:08:47.102 "compare_and_write": false, 00:08:47.102 "abort": false, 00:08:47.102 "seek_hole": true, 00:08:47.102 "seek_data": true, 00:08:47.102 "copy": false, 00:08:47.102 "nvme_iov_md": false 00:08:47.102 }, 00:08:47.102 "driver_specific": { 00:08:47.102 "lvol": { 00:08:47.102 "lvol_store_uuid": "90b9c240-323d-421a-99ea-c466ade6eede", 00:08:47.102 "base_bdev": "aio_bdev", 00:08:47.102 "thin_provision": false, 00:08:47.102 "num_allocated_clusters": 38, 00:08:47.102 "snapshot": false, 00:08:47.102 "clone": false, 00:08:47.102 "esnap_clone": false 00:08:47.102 } 00:08:47.102 } 00:08:47.102 } 00:08:47.102 ] 00:08:47.102 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:47.102 04:25:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:47.102 04:25:25 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:47.363 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:47.363 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:47.363 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.622 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.622 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cdd5abbc-891b-4044-aa6c-71d3b4161559 00:08:47.622 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90b9c240-323d-421a-99ea-c466ade6eede 00:08:47.883 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.143 00:08:48.143 real 0m15.521s 00:08:48.143 user 0m15.286s 00:08:48.143 sys 0m1.208s 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:48.143 ************************************ 00:08:48.143 END TEST lvs_grow_clean 00:08:48.143 ************************************ 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.143 ************************************ 00:08:48.143 START TEST lvs_grow_dirty 00:08:48.143 ************************************ 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.143 04:25:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.403 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:48.403 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:48.663 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:08:48.663 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:08:48.663 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:48.923 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:48.923 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:48.923 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e lvol 150 00:08:48.923 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=93c6613e-c8ea-4ebb-9ba6-0ea621c7177b 00:08:48.923 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.923 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:49.183 [2024-11-17 04:25:27.912889] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:49.183 [2024-11-17 04:25:27.912962] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:49.183 true 00:08:49.183 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:08:49.183 04:25:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:49.443 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:49.443 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:49.703 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 93c6613e-c8ea-4ebb-9ba6-0ea621c7177b 00:08:49.703 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:49.963 [2024-11-17 04:25:28.659276] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:49.963 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=162109 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 162109 /var/tmp/bdevperf.sock 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 162109 ']' 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:50.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.223 04:25:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.223 [2024-11-17 04:25:28.890222] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
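The initiator side is identical in the clean and dirty variants: bdevperf attaches to the exported namespace over RDMA and drives the 10-second random-write load while the lvstore is grown underneath it. The two RPCs involved, as they appear in the trace (paths abbreviated):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0                 # surfaces the lvol as Nvme0n1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # randwrite, qd 128, 4 KiB, 10 s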
00:08:50.223 [2024-11-17 04:25:28.890271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162109 ] 00:08:50.223 [2024-11-17 04:25:28.980858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.223 [2024-11-17 04:25:29.003191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.488 04:25:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.488 04:25:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:50.489 04:25:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:50.750 Nvme0n1 00:08:50.750 04:25:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:50.750 [ 00:08:50.750 { 00:08:50.751 "name": "Nvme0n1", 00:08:50.751 "aliases": [ 00:08:50.751 "93c6613e-c8ea-4ebb-9ba6-0ea621c7177b" 00:08:50.751 ], 00:08:50.751 "product_name": "NVMe disk", 00:08:50.751 "block_size": 4096, 00:08:50.751 "num_blocks": 38912, 00:08:50.751 "uuid": "93c6613e-c8ea-4ebb-9ba6-0ea621c7177b", 00:08:50.751 "numa_id": 1, 00:08:50.751 "assigned_rate_limits": { 00:08:50.751 "rw_ios_per_sec": 0, 00:08:50.751 "rw_mbytes_per_sec": 0, 00:08:50.751 "r_mbytes_per_sec": 0, 00:08:50.751 "w_mbytes_per_sec": 0 00:08:50.751 }, 00:08:50.751 "claimed": false, 00:08:50.751 "zoned": false, 00:08:50.751 "supported_io_types": { 00:08:50.751 "read": true, 00:08:50.751 "write": true, 00:08:50.751 "unmap": true, 00:08:50.751 "flush": true, 00:08:50.751 "reset": true, 00:08:50.751 "nvme_admin": true, 00:08:50.751 "nvme_io": true, 00:08:50.751 "nvme_io_md": false, 00:08:50.751 "write_zeroes": true, 00:08:50.751 "zcopy": false, 00:08:50.751 "get_zone_info": false, 00:08:50.751 "zone_management": false, 00:08:50.751 "zone_append": false, 00:08:50.751 "compare": true, 00:08:50.751 "compare_and_write": true, 00:08:50.751 "abort": true, 00:08:50.751 "seek_hole": false, 00:08:50.751 "seek_data": false, 00:08:50.751 "copy": true, 00:08:50.751 "nvme_iov_md": false 00:08:50.751 }, 00:08:50.751 "memory_domains": [ 00:08:50.751 { 00:08:50.751 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:50.751 "dma_device_type": 0 00:08:50.751 } 00:08:50.751 ], 00:08:50.751 "driver_specific": { 00:08:50.751 "nvme": [ 00:08:50.751 { 00:08:50.751 "trid": { 00:08:50.751 "trtype": "RDMA", 00:08:50.751 "adrfam": "IPv4", 00:08:50.751 "traddr": "192.168.100.8", 00:08:50.751 "trsvcid": "4420", 00:08:50.751 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:50.751 }, 00:08:50.751 "ctrlr_data": { 00:08:50.751 "cntlid": 1, 00:08:50.751 "vendor_id": "0x8086", 00:08:50.751 "model_number": "SPDK bdev Controller", 00:08:50.751 "serial_number": "SPDK0", 00:08:50.751 "firmware_revision": "25.01", 00:08:50.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:50.751 "oacs": { 00:08:50.751 "security": 0, 00:08:50.751 "format": 0, 00:08:50.751 "firmware": 0, 00:08:50.751 "ns_manage": 0 00:08:50.751 }, 00:08:50.751 "multi_ctrlr": true, 
00:08:50.751 "ana_reporting": false 00:08:50.751 }, 00:08:50.751 "vs": { 00:08:50.751 "nvme_version": "1.3" 00:08:50.751 }, 00:08:50.751 "ns_data": { 00:08:50.751 "id": 1, 00:08:50.751 "can_share": true 00:08:50.751 } 00:08:50.751 } 00:08:50.751 ], 00:08:50.751 "mp_policy": "active_passive" 00:08:50.751 } 00:08:50.751 } 00:08:50.751 ] 00:08:50.751 04:25:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=162366 00:08:50.751 04:25:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:50.751 04:25:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:51.010 Running I/O for 10 seconds... 00:08:51.950 Latency(us) 00:08:51.950 [2024-11-17T03:25:30.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.950 Nvme0n1 : 1.00 34496.00 134.75 0.00 0.00 0.00 0.00 0.00 00:08:51.950 [2024-11-17T03:25:30.780Z] =================================================================================================================== 00:08:51.950 [2024-11-17T03:25:30.780Z] Total : 34496.00 134.75 0.00 0.00 0.00 0.00 0.00 00:08:51.950 00:08:52.890 04:25:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:08:52.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.890 Nvme0n1 : 2.00 34913.00 136.38 0.00 0.00 0.00 0.00 0.00 00:08:52.890 [2024-11-17T03:25:31.720Z] =================================================================================================================== 00:08:52.890 [2024-11-17T03:25:31.720Z] Total : 34913.00 136.38 0.00 0.00 0.00 0.00 0.00 00:08:52.890 00:08:53.150 true 00:08:53.150 04:25:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:08:53.150 04:25:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:53.150 04:25:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:53.150 04:25:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:53.150 04:25:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 162366 00:08:54.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.091 Nvme0n1 : 3.00 35135.67 137.25 0.00 0.00 0.00 0.00 0.00 00:08:54.091 [2024-11-17T03:25:32.921Z] =================================================================================================================== 00:08:54.091 [2024-11-17T03:25:32.921Z] Total : 35135.67 137.25 0.00 0.00 0.00 0.00 0.00 00:08:54.091 00:08:55.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.166 Nvme0n1 : 4.00 35327.50 138.00 0.00 0.00 0.00 0.00 0.00 00:08:55.166 [2024-11-17T03:25:33.996Z] 
=================================================================================================================== 00:08:55.166 [2024-11-17T03:25:33.996Z] Total : 35327.50 138.00 0.00 0.00 0.00 0.00 0.00 00:08:55.166 00:08:56.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.105 Nvme0n1 : 5.00 35442.80 138.45 0.00 0.00 0.00 0.00 0.00 00:08:56.105 [2024-11-17T03:25:34.935Z] =================================================================================================================== 00:08:56.105 [2024-11-17T03:25:34.935Z] Total : 35442.80 138.45 0.00 0.00 0.00 0.00 0.00 00:08:56.105 00:08:57.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.045 Nvme0n1 : 6.00 35524.83 138.77 0.00 0.00 0.00 0.00 0.00 00:08:57.045 [2024-11-17T03:25:35.875Z] =================================================================================================================== 00:08:57.045 [2024-11-17T03:25:35.875Z] Total : 35524.83 138.77 0.00 0.00 0.00 0.00 0.00 00:08:57.045 00:08:57.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.985 Nvme0n1 : 7.00 35601.86 139.07 0.00 0.00 0.00 0.00 0.00 00:08:57.985 [2024-11-17T03:25:36.815Z] =================================================================================================================== 00:08:57.985 [2024-11-17T03:25:36.815Z] Total : 35601.86 139.07 0.00 0.00 0.00 0.00 0.00 00:08:57.985 00:08:58.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.924 Nvme0n1 : 8.00 35651.62 139.26 0.00 0.00 0.00 0.00 0.00 00:08:58.924 [2024-11-17T03:25:37.754Z] =================================================================================================================== 00:08:58.924 [2024-11-17T03:25:37.754Z] Total : 35651.62 139.26 0.00 0.00 0.00 0.00 0.00 00:08:58.924 00:08:59.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.862 Nvme0n1 : 9.00 35694.44 139.43 0.00 0.00 0.00 0.00 0.00 00:08:59.862 [2024-11-17T03:25:38.692Z] =================================================================================================================== 00:08:59.862 [2024-11-17T03:25:38.692Z] Total : 35694.44 139.43 0.00 0.00 0.00 0.00 0.00 00:08:59.862 00:09:01.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.244 Nvme0n1 : 10.00 35730.60 139.57 0.00 0.00 0.00 0.00 0.00 00:09:01.244 [2024-11-17T03:25:40.074Z] =================================================================================================================== 00:09:01.244 [2024-11-17T03:25:40.074Z] Total : 35730.60 139.57 0.00 0.00 0.00 0.00 0.00 00:09:01.244 00:09:01.244 00:09:01.244 Latency(us) 00:09:01.244 [2024-11-17T03:25:40.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.244 Nvme0n1 : 10.00 35729.48 139.57 0.00 0.00 3579.63 2555.90 15833.50 00:09:01.244 [2024-11-17T03:25:40.074Z] =================================================================================================================== 00:09:01.244 [2024-11-17T03:25:40.074Z] Total : 35729.48 139.57 0.00 0.00 3579.63 2555.90 15833.50 00:09:01.244 { 00:09:01.244 "results": [ 00:09:01.244 { 00:09:01.244 "job": "Nvme0n1", 00:09:01.244 "core_mask": "0x2", 00:09:01.244 "workload": "randwrite", 00:09:01.244 "status": "finished", 00:09:01.244 "queue_depth": 128, 00:09:01.244 "io_size": 4096, 
00:09:01.244 "runtime": 10.003251, 00:09:01.244 "iops": 35729.48434463956, 00:09:01.244 "mibps": 139.56829822124828, 00:09:01.244 "io_failed": 0, 00:09:01.244 "io_timeout": 0, 00:09:01.244 "avg_latency_us": 3579.6346258083827, 00:09:01.244 "min_latency_us": 2555.904, 00:09:01.244 "max_latency_us": 15833.4976 00:09:01.244 } 00:09:01.244 ], 00:09:01.244 "core_count": 1 00:09:01.244 } 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 162109 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 162109 ']' 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 162109 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 162109 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 162109' 00:09:01.244 killing process with pid 162109 00:09:01.244 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 162109 00:09:01.244 Received shutdown signal, test time was about 10.000000 seconds 00:09:01.244 00:09:01.244 Latency(us) 00:09:01.244 [2024-11-17T03:25:40.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.244 [2024-11-17T03:25:40.074Z] =================================================================================================================== 00:09:01.245 [2024-11-17T03:25:40.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:01.245 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 162109 00:09:01.245 04:25:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:01.504 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:01.764 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:09:01.764 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:01.764 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:01.764 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:01.764 04:25:40 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 158960 00:09:01.764 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 158960 00:09:02.024 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 158960 Killed "${NVMF_APP[@]}" "$@" 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=164244 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 164244 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 164244 ']' 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.024 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.024 [2024-11-17 04:25:40.684552] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:02.025 [2024-11-17 04:25:40.684607] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.025 [2024-11-17 04:25:40.761678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.025 [2024-11-17 04:25:40.782738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.025 [2024-11-17 04:25:40.782773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.025 [2024-11-17 04:25:40.782782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.025 [2024-11-17 04:25:40.782791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
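What distinguishes the dirty variant happens here: the first target (pid 158960) was killed with SIGKILL, so the grown lvstore was never cleanly unloaded, and the freshly started target (pid 164244 in this run) must replay the blobstore metadata when the AIO bdev is recreated. In outline, with aio_file again abbreviating the full path:

    kill -9 "$nvmfpid"                          # no clean blobstore unload
    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &            # restart the target
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    # expect 'Performing recovery on blobstore' plus Recover: blob 0x0 / 0x1 notices
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> # free_clusters (61) and total (99) must survive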
00:09:02.025 [2024-11-17 04:25:40.782814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.025 [2024-11-17 04:25:40.783400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.284 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.284 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:02.285 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.285 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.285 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.285 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.285 04:25:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.285 [2024-11-17 04:25:41.096608] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:02.285 [2024-11-17 04:25:41.096705] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:02.285 [2024-11-17 04:25:41.096733] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:02.545 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:02.545 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 93c6613e-c8ea-4ebb-9ba6-0ea621c7177b 00:09:02.545 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=93c6613e-c8ea-4ebb-9ba6-0ea621c7177b 00:09:02.545 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.545 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:02.545 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.545 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.545 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:02.545 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 93c6613e-c8ea-4ebb-9ba6-0ea621c7177b -t 2000 00:09:02.804 [ 00:09:02.805 { 00:09:02.805 "name": "93c6613e-c8ea-4ebb-9ba6-0ea621c7177b", 00:09:02.805 "aliases": [ 00:09:02.805 "lvs/lvol" 00:09:02.805 ], 00:09:02.805 "product_name": "Logical Volume", 00:09:02.805 "block_size": 4096, 00:09:02.805 "num_blocks": 38912, 00:09:02.805 "uuid": "93c6613e-c8ea-4ebb-9ba6-0ea621c7177b", 00:09:02.805 "assigned_rate_limits": { 00:09:02.805 "rw_ios_per_sec": 0, 00:09:02.805 "rw_mbytes_per_sec": 0, 
00:09:02.805 "r_mbytes_per_sec": 0, 00:09:02.805 "w_mbytes_per_sec": 0 00:09:02.805 }, 00:09:02.805 "claimed": false, 00:09:02.805 "zoned": false, 00:09:02.805 "supported_io_types": { 00:09:02.805 "read": true, 00:09:02.805 "write": true, 00:09:02.805 "unmap": true, 00:09:02.805 "flush": false, 00:09:02.805 "reset": true, 00:09:02.805 "nvme_admin": false, 00:09:02.805 "nvme_io": false, 00:09:02.805 "nvme_io_md": false, 00:09:02.805 "write_zeroes": true, 00:09:02.805 "zcopy": false, 00:09:02.805 "get_zone_info": false, 00:09:02.805 "zone_management": false, 00:09:02.805 "zone_append": false, 00:09:02.805 "compare": false, 00:09:02.805 "compare_and_write": false, 00:09:02.805 "abort": false, 00:09:02.805 "seek_hole": true, 00:09:02.805 "seek_data": true, 00:09:02.805 "copy": false, 00:09:02.805 "nvme_iov_md": false 00:09:02.805 }, 00:09:02.805 "driver_specific": { 00:09:02.805 "lvol": { 00:09:02.805 "lvol_store_uuid": "622f437c-8cec-4e6a-9932-c7d5a9d7de5e", 00:09:02.805 "base_bdev": "aio_bdev", 00:09:02.805 "thin_provision": false, 00:09:02.805 "num_allocated_clusters": 38, 00:09:02.805 "snapshot": false, 00:09:02.805 "clone": false, 00:09:02.805 "esnap_clone": false 00:09:02.805 } 00:09:02.805 } 00:09:02.805 } 00:09:02.805 ] 00:09:02.805 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:02.805 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:09:02.805 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:03.065 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:03.065 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:09:03.065 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:03.065 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:03.065 04:25:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.325 [2024-11-17 04:25:42.049286] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:03.325 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:09:03.585 request: 00:09:03.585 { 00:09:03.585 "uuid": "622f437c-8cec-4e6a-9932-c7d5a9d7de5e", 00:09:03.585 "method": "bdev_lvol_get_lvstores", 00:09:03.585 "req_id": 1 00:09:03.585 } 00:09:03.585 Got JSON-RPC error response 00:09:03.585 response: 00:09:03.585 { 00:09:03.585 "code": -19, 00:09:03.585 "message": "No such device" 00:09:03.585 } 00:09:03.585 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:03.585 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:03.585 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:03.585 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:03.585 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.845 aio_bdev 00:09:03.845 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 93c6613e-c8ea-4ebb-9ba6-0ea621c7177b 00:09:03.845 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=93c6613e-c8ea-4ebb-9ba6-0ea621c7177b 00:09:03.845 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.845 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:03.845 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.845 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.845 04:25:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:03.845 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 93c6613e-c8ea-4ebb-9ba6-0ea621c7177b -t 2000 00:09:04.105 [ 00:09:04.105 { 00:09:04.105 "name": "93c6613e-c8ea-4ebb-9ba6-0ea621c7177b", 00:09:04.105 "aliases": [ 00:09:04.105 "lvs/lvol" 00:09:04.105 ], 00:09:04.105 "product_name": "Logical Volume", 00:09:04.105 "block_size": 4096, 00:09:04.105 "num_blocks": 38912, 00:09:04.105 "uuid": "93c6613e-c8ea-4ebb-9ba6-0ea621c7177b", 00:09:04.105 "assigned_rate_limits": { 00:09:04.105 "rw_ios_per_sec": 0, 00:09:04.105 "rw_mbytes_per_sec": 0, 00:09:04.105 "r_mbytes_per_sec": 0, 00:09:04.105 "w_mbytes_per_sec": 0 00:09:04.105 }, 00:09:04.105 "claimed": false, 00:09:04.105 "zoned": false, 00:09:04.105 "supported_io_types": { 00:09:04.105 "read": true, 00:09:04.105 "write": true, 00:09:04.105 "unmap": true, 00:09:04.105 "flush": false, 00:09:04.105 "reset": true, 00:09:04.105 "nvme_admin": false, 00:09:04.105 "nvme_io": false, 00:09:04.105 "nvme_io_md": false, 00:09:04.105 "write_zeroes": true, 00:09:04.105 "zcopy": false, 00:09:04.105 "get_zone_info": false, 00:09:04.105 "zone_management": false, 00:09:04.105 "zone_append": false, 00:09:04.105 "compare": false, 00:09:04.105 "compare_and_write": false, 00:09:04.105 "abort": false, 00:09:04.105 "seek_hole": true, 00:09:04.105 "seek_data": true, 00:09:04.105 "copy": false, 00:09:04.105 "nvme_iov_md": false 00:09:04.105 }, 00:09:04.105 "driver_specific": { 00:09:04.105 "lvol": { 00:09:04.105 "lvol_store_uuid": "622f437c-8cec-4e6a-9932-c7d5a9d7de5e", 00:09:04.105 "base_bdev": "aio_bdev", 00:09:04.105 "thin_provision": false, 00:09:04.105 "num_allocated_clusters": 38, 00:09:04.105 "snapshot": false, 00:09:04.105 "clone": false, 00:09:04.105 "esnap_clone": false 00:09:04.105 } 00:09:04.105 } 00:09:04.105 } 00:09:04.105 ] 00:09:04.105 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:04.105 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:09:04.105 04:25:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:04.365 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:04.365 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:09:04.365 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:04.625 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:04.625 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 93c6613e-c8ea-4ebb-9ba6-0ea621c7177b 00:09:04.625 04:25:43 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e 00:09:04.885 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:05.145 00:09:05.145 real 0m16.901s 00:09:05.145 user 0m44.178s 00:09:05.145 sys 0m3.199s 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.145 ************************************ 00:09:05.145 END TEST lvs_grow_dirty 00:09:05.145 ************************************ 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:05.145 nvmf_trace.0 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.145 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:05.145 rmmod nvme_rdma 00:09:05.145 rmmod nvme_fabrics 00:09:05.405 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.405 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:05.405 
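The teardown traced above runs strictly in reverse order of setup: the lvol first, then its lvstore, then the backing AIO bdev, and finally the file on disk, after which the trace shm file is archived and the nvme-rdma/nvme-fabrics modules are unloaded with retries. Condensed from the commands in the trace (repo paths shortened for readability):

scripts/rpc.py bdev_lvol_delete 93c6613e-c8ea-4ebb-9ba6-0ea621c7177b
scripts/rpc.py bdev_lvol_delete_lvstore -u 622f437c-8cec-4e6a-9932-c7d5a9d7de5e
scripts/rpc.py bdev_aio_delete aio_bdev
rm -f test/nvmf/target/aio_bdev
# archive the trace ring buffer for offline analysis
tar -C /dev/shm/ -cvzf ../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0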
04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:05.405 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 164244 ']' 00:09:05.405 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 164244 00:09:05.405 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 164244 ']' 00:09:05.405 04:25:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 164244 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164244 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164244' 00:09:05.405 killing process with pid 164244 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 164244 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 164244 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:05.405 00:09:05.405 real 0m41.055s 00:09:05.405 user 1m5.521s 00:09:05.405 sys 0m10.472s 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.405 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:05.405 ************************************ 00:09:05.405 END TEST nvmf_lvs_grow 00:09:05.405 ************************************ 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.665 ************************************ 00:09:05.665 START TEST nvmf_bdev_io_wait 00:09:05.665 ************************************ 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:05.665 * Looking for test storage... 
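The lcov version gate that follows walks scripts/common.sh's cmp_versions helper: both version strings are split into arrays on '.', '-' and ':', then compared component-wise, with missing components treated as zero. A condensed sketch of that comparison (not the exact source):

cmp_lt() {                                # sketch of: lt 1.15 2
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                              # equal is not less-than
}
cmp_lt 1.15 2 && echo '1.15 < 2'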
00:09:05.665 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.665 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.666 --rc genhtml_branch_coverage=1 00:09:05.666 --rc genhtml_function_coverage=1 00:09:05.666 --rc genhtml_legend=1 00:09:05.666 --rc geninfo_all_blocks=1 00:09:05.666 --rc geninfo_unexecuted_blocks=1 00:09:05.666 00:09:05.666 ' 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.666 --rc genhtml_branch_coverage=1 00:09:05.666 --rc genhtml_function_coverage=1 00:09:05.666 --rc genhtml_legend=1 00:09:05.666 --rc geninfo_all_blocks=1 00:09:05.666 --rc geninfo_unexecuted_blocks=1 00:09:05.666 00:09:05.666 ' 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.666 --rc genhtml_branch_coverage=1 00:09:05.666 --rc genhtml_function_coverage=1 00:09:05.666 --rc genhtml_legend=1 00:09:05.666 --rc geninfo_all_blocks=1 00:09:05.666 --rc geninfo_unexecuted_blocks=1 00:09:05.666 00:09:05.666 ' 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.666 --rc genhtml_branch_coverage=1 00:09:05.666 --rc genhtml_function_coverage=1 00:09:05.666 --rc genhtml_legend=1 00:09:05.666 --rc geninfo_all_blocks=1 00:09:05.666 --rc geninfo_unexecuted_blocks=1 00:09:05.666 00:09:05.666 ' 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.666 04:25:44 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.666 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.926 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.927 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.927 04:25:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.062 04:25:51 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:14.062 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:14.062 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:14.062 04:25:51 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:14.062 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:14.062 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.062 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:14.063 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:14.063 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:14.063 altname enp217s0f0np0 00:09:14.063 altname ens818f0np0 00:09:14.063 inet 192.168.100.8/24 scope global mlx_0_0 00:09:14.063 valid_lft forever preferred_lft forever 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:14.063 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:14.063 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:14.063 altname enp217s0f1np1 00:09:14.063 altname ens818f1np1 00:09:14.063 inet 192.168.100.9/24 scope global mlx_0_1 00:09:14.063 valid_lft forever preferred_lft forever 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
-t rxe_net_devs 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:14.063 192.168.100.9' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:14.063 192.168.100.9' 00:09:14.063 
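Address discovery for the two RDMA netdevs reduces to one pipeline per interface: take the single-line `ip -o -4` output, keep the fourth field (the CIDR address), and strip the prefix length, which is exactly how 192.168.100.8 and 192.168.100.9 were obtained above:

# get_ip_address, as exercised in the trace
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9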
04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:14.063 192.168.100.9' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:14.063 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=168278 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 168278 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 168278 ']' 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.064 04:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 [2024-11-17 04:25:51.865157] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:14.064 [2024-11-17 04:25:51.865218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.064 [2024-11-17 04:25:51.957458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.064 [2024-11-17 04:25:51.980894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.064 [2024-11-17 04:25:51.980936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.064 [2024-11-17 04:25:51.980946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.064 [2024-11-17 04:25:51.980954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.064 [2024-11-17 04:25:51.980960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.064 [2024-11-17 04:25:51.982578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.064 [2024-11-17 04:25:51.982618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.064 [2024-11-17 04:25:51.982728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.064 [2024-11-17 04:25:51.982729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.064 04:25:52 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 [2024-11-17 04:25:52.168377] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21f6ba0/0x21fb050) succeed. 00:09:14.064 [2024-11-17 04:25:52.177773] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21f81e0/0x223c6f0) succeed. 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 Malloc0 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.064 [2024-11-17 04:25:52.371186] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=168308 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=168310 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
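Each rpc_cmd in the trace maps onto a plain scripts/rpc.py call, so the whole target setup for this test can be replayed by hand. The sequence below uses the same arguments as the trace; driving rpc.py directly instead of the harness's rpc_cmd wrapper is the only substitution:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1        # deliberately tiny bdev_io pool/cache to force the IO-wait path
$rpc framework_start_init              # leave --wait-for-rpc mode and finish subsystem init
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420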
00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:14.064 { 00:09:14.064 "params": { 00:09:14.064 "name": "Nvme$subsystem", 00:09:14.064 "trtype": "$TEST_TRANSPORT", 00:09:14.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.064 "adrfam": "ipv4", 00:09:14.064 "trsvcid": "$NVMF_PORT", 00:09:14.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.064 "hdgst": ${hdgst:-false}, 00:09:14.064 "ddgst": ${ddgst:-false} 00:09:14.064 }, 00:09:14.064 "method": "bdev_nvme_attach_controller" 00:09:14.064 } 00:09:14.064 EOF 00:09:14.064 )") 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=168312 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:14.064 { 00:09:14.064 "params": { 00:09:14.064 "name": "Nvme$subsystem", 00:09:14.064 "trtype": "$TEST_TRANSPORT", 00:09:14.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.064 "adrfam": "ipv4", 00:09:14.064 "trsvcid": "$NVMF_PORT", 00:09:14.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.064 "hdgst": ${hdgst:-false}, 00:09:14.064 "ddgst": ${ddgst:-false} 00:09:14.064 }, 00:09:14.064 "method": "bdev_nvme_attach_controller" 00:09:14.064 } 00:09:14.064 EOF 00:09:14.064 )") 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=168315 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:14.064 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:14.065 { 00:09:14.065 "params": { 00:09:14.065 "name": "Nvme$subsystem", 00:09:14.065 "trtype": "$TEST_TRANSPORT", 
00:09:14.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.065 "adrfam": "ipv4", 00:09:14.065 "trsvcid": "$NVMF_PORT", 00:09:14.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.065 "hdgst": ${hdgst:-false}, 00:09:14.065 "ddgst": ${ddgst:-false} 00:09:14.065 }, 00:09:14.065 "method": "bdev_nvme_attach_controller" 00:09:14.065 } 00:09:14.065 EOF 00:09:14.065 )") 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:14.065 { 00:09:14.065 "params": { 00:09:14.065 "name": "Nvme$subsystem", 00:09:14.065 "trtype": "$TEST_TRANSPORT", 00:09:14.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.065 "adrfam": "ipv4", 00:09:14.065 "trsvcid": "$NVMF_PORT", 00:09:14.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.065 "hdgst": ${hdgst:-false}, 00:09:14.065 "ddgst": ${ddgst:-false} 00:09:14.065 }, 00:09:14.065 "method": "bdev_nvme_attach_controller" 00:09:14.065 } 00:09:14.065 EOF 00:09:14.065 )") 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 168308 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:14.065 "params": { 00:09:14.065 "name": "Nvme1", 00:09:14.065 "trtype": "rdma", 00:09:14.065 "traddr": "192.168.100.8", 00:09:14.065 "adrfam": "ipv4", 00:09:14.065 "trsvcid": "4420", 00:09:14.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.065 "hdgst": false, 00:09:14.065 "ddgst": false 00:09:14.065 }, 00:09:14.065 "method": "bdev_nvme_attach_controller" 00:09:14.065 }' 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:14.065 "params": { 00:09:14.065 "name": "Nvme1", 00:09:14.065 "trtype": "rdma", 00:09:14.065 "traddr": "192.168.100.8", 00:09:14.065 "adrfam": "ipv4", 00:09:14.065 "trsvcid": "4420", 00:09:14.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.065 "hdgst": false, 00:09:14.065 "ddgst": false 00:09:14.065 }, 00:09:14.065 "method": "bdev_nvme_attach_controller" 00:09:14.065 }' 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:14.065 "params": { 00:09:14.065 "name": "Nvme1", 00:09:14.065 "trtype": "rdma", 00:09:14.065 "traddr": "192.168.100.8", 00:09:14.065 "adrfam": "ipv4", 00:09:14.065 "trsvcid": "4420", 00:09:14.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.065 "hdgst": false, 00:09:14.065 "ddgst": false 00:09:14.065 }, 00:09:14.065 "method": "bdev_nvme_attach_controller" 00:09:14.065 }' 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:14.065 04:25:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:14.065 "params": { 00:09:14.065 "name": "Nvme1", 00:09:14.065 "trtype": "rdma", 00:09:14.065 "traddr": "192.168.100.8", 00:09:14.065 "adrfam": "ipv4", 00:09:14.065 "trsvcid": "4420", 00:09:14.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.065 "hdgst": false, 00:09:14.065 "ddgst": false 00:09:14.065 }, 00:09:14.065 "method": "bdev_nvme_attach_controller" 00:09:14.065 }' 00:09:14.065 [2024-11-17 04:25:52.423239] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:14.065 [2024-11-17 04:25:52.423294] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:14.065 [2024-11-17 04:25:52.424786] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:14.065 [2024-11-17 04:25:52.424837] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:14.065 [2024-11-17 04:25:52.425620] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:14.065 [2024-11-17 04:25:52.425669] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:14.065 [2024-11-17 04:25:52.426104] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:14.065 [2024-11-17 04:25:52.426152] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:14.065 [2024-11-17 04:25:52.615861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.065 [2024-11-17 04:25:52.631226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:14.065 [2024-11-17 04:25:52.716474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.065 [2024-11-17 04:25:52.732110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:14.065 [2024-11-17 04:25:52.817088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.065 [2024-11-17 04:25:52.836974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:14.065 [2024-11-17 04:25:52.879542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.326 [2024-11-17 04:25:52.894987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:14.326 Running I/O for 1 seconds... 00:09:14.326 Running I/O for 1 seconds... 00:09:14.326 Running I/O for 1 seconds... 00:09:14.326 Running I/O for 1 seconds... 00:09:15.267 17196.00 IOPS, 67.17 MiB/s [2024-11-17T03:25:54.097Z] 17759.00 IOPS, 69.37 MiB/s [2024-11-17T03:25:54.097Z] 14553.00 IOPS, 56.85 MiB/s 00:09:15.267 Latency(us) 00:09:15.267 [2024-11-17T03:25:54.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.267 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:15.267 Nvme1n1 : 1.01 17235.25 67.33 0.00 0.00 7403.21 4561.31 15414.07 00:09:15.267 [2024-11-17T03:25:54.097Z] =================================================================================================================== 00:09:15.267 [2024-11-17T03:25:54.097Z] Total : 17235.25 67.33 0.00 0.00 7403.21 4561.31 15414.07 00:09:15.267 00:09:15.267 Latency(us) 00:09:15.267 [2024-11-17T03:25:54.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.267 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:15.267 Nvme1n1 : 1.01 17799.96 69.53 0.00 0.00 7170.31 4430.23 16043.21 00:09:15.267 [2024-11-17T03:25:54.097Z] =================================================================================================================== 00:09:15.267 [2024-11-17T03:25:54.097Z] Total : 17799.96 69.53 0.00 0.00 7170.31 4430.23 16043.21 00:09:15.267 00:09:15.267 Latency(us) 00:09:15.267 [2024-11-17T03:25:54.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.267 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:15.267 Nvme1n1 : 1.01 14601.25 57.04 0.00 0.00 8739.96 5400.17 18140.36 00:09:15.267 [2024-11-17T03:25:54.097Z] =================================================================================================================== 00:09:15.267 [2024-11-17T03:25:54.097Z] Total : 14601.25 57.04 0.00 0.00 8739.96 5400.17 18140.36 00:09:15.267 261720.00 IOPS, 1022.34 MiB/s 00:09:15.267 Latency(us) 00:09:15.267 [2024-11-17T03:25:54.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.267 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:15.267 Nvme1n1 : 1.00 261336.78 1020.85 0.00 0.00 486.63 212.99 1848.12 00:09:15.267 [2024-11-17T03:25:54.097Z] 
=================================================================================================================== 00:09:15.267 [2024-11-17T03:25:54.097Z] Total : 261336.78 1020.85 0.00 0.00 486.63 212.99 1848.12 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 168310 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 168312 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 168315 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.527 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:15.528 rmmod nvme_rdma 00:09:15.528 rmmod nvme_fabrics 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 168278 ']' 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 168278 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 168278 ']' 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 168278 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 168278 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
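killprocess, traced above against pid 168278, reconstructs to roughly the helper below (pieced together from the visible trace; the real helper's handling of non-Linux hosts and sudo-owned pids is an assumption):

killprocess() {
    local pid=$1
    kill -0 "$pid"                                        # error out if the pid is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    fi
    [ "$process_name" = sudo ] && return 1                # refuse to kill a sudo wrapper (assumption)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                           # reap it before the next test starts
}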
00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 168278' 00:09:15.528 killing process with pid 168278 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 168278 00:09:15.528 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 168278 00:09:15.788 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:15.788 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:15.788 00:09:15.788 real 0m10.285s 00:09:15.788 user 0m17.438s 00:09:15.788 sys 0m6.969s 00:09:15.788 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.788 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.788 ************************************ 00:09:15.789 END TEST nvmf_bdev_io_wait 00:09:15.789 ************************************ 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.049 ************************************ 00:09:16.049 START TEST nvmf_queue_depth 00:09:16.049 ************************************ 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:16.049 * Looking for test storage... 
00:09:16.049 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.049 --rc genhtml_branch_coverage=1 00:09:16.049 --rc genhtml_function_coverage=1 00:09:16.049 --rc genhtml_legend=1 00:09:16.049 --rc geninfo_all_blocks=1 00:09:16.049 --rc geninfo_unexecuted_blocks=1 00:09:16.049 00:09:16.049 ' 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.049 --rc genhtml_branch_coverage=1 00:09:16.049 --rc genhtml_function_coverage=1 00:09:16.049 --rc genhtml_legend=1 00:09:16.049 --rc geninfo_all_blocks=1 00:09:16.049 --rc geninfo_unexecuted_blocks=1 00:09:16.049 00:09:16.049 ' 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.049 --rc genhtml_branch_coverage=1 00:09:16.049 --rc genhtml_function_coverage=1 00:09:16.049 --rc genhtml_legend=1 00:09:16.049 --rc geninfo_all_blocks=1 00:09:16.049 --rc geninfo_unexecuted_blocks=1 00:09:16.049 00:09:16.049 ' 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.049 --rc genhtml_branch_coverage=1 00:09:16.049 --rc genhtml_function_coverage=1 00:09:16.049 --rc genhtml_legend=1 00:09:16.049 --rc geninfo_all_blocks=1 00:09:16.049 --rc geninfo_unexecuted_blocks=1 00:09:16.049 00:09:16.049 ' 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.049 04:25:54 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.049 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:16.309 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.310 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.310 04:25:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:24.461 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:24.461 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
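The scan above is pure sysfs: each Mellanox function (vendor 0x15b3, device 0x1015, bound to mlx5_core) is mapped to its kernel interfaces by globbing its PCI node. Stripped to its core:

for pci in 0000:d9:00.0 0000:d9:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done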
00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:24.461 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:24.461 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:24.461 04:26:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.461 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:24.462 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:24.462 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:24.462 altname enp217s0f0np0 00:09:24.462 altname ens818f0np0 00:09:24.462 inet 192.168.100.8/24 scope global mlx_0_0 00:09:24.462 valid_lft forever preferred_lft forever 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:24.462 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:24.462 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:24.462 altname enp217s0f1np1 00:09:24.462 altname ens818f1np1 00:09:24.462 inet 192.168.100.9/24 scope global mlx_0_1 00:09:24.462 valid_lft forever preferred_lft forever 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:24.462 04:26:02 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:24.462 192.168.100.9' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:24.462 192.168.100.9' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:24.462 192.168.100.9' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=172165 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 172165 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 172165 ']' 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.462 [2024-11-17 04:26:02.267377] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:24.462 [2024-11-17 04:26:02.267429] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.462 [2024-11-17 04:26:02.361641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.462 [2024-11-17 04:26:02.382397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.462 [2024-11-17 04:26:02.382435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.462 [2024-11-17 04:26:02.382444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.462 [2024-11-17 04:26:02.382453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.462 [2024-11-17 04:26:02.382459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.462 [2024-11-17 04:26:02.383055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.462 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.463 [2024-11-17 04:26:02.540166] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x913f50/0x918400) succeed. 00:09:24.463 [2024-11-17 04:26:02.549221] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9153b0/0x959aa0) succeed. 
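
The NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP values traced just above come from a small pipeline in nvmf/common.sh: "ip -o -4 addr show" prints one record per line, awk keeps the CIDR field, and cut strips the prefix length. A minimal standalone sketch of that helper, assuming the mlx_0_0 / mlx_0_1 interface names this rig reports:

    # Sketch of the get_ip_address helper traced above (nvmf/common.sh@116-117).
    get_ip_address() {
        local interface=$1
        # "ip -o" emits one line per address; field 4 is the CIDR, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this host
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 on this host
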
00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.463 Malloc0 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.463 [2024-11-17 04:26:02.646457] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=172315 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 172315 /var/tmp/bdevperf.sock 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 172315 ']' 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:24.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.463 [2024-11-17 04:26:02.695391] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:24.463 [2024-11-17 04:26:02.695436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172315 ] 00:09:24.463 [2024-11-17 04:26:02.786059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.463 [2024-11-17 04:26:02.808103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.463 NVMe0n1 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.463 04:26:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:24.463 Running I/O for 10 seconds... 
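
Before the I/O numbers land, note the shape of the target just assembled over RPC in the trace above: an RDMA transport, a 64 MiB / 512 B Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 192.168.100.8:4420. A condensed replay of those calls with scripts/rpc.py (a sketch, assuming an nvmf_tgt is already listening on the default /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # bdevperf was started with -z (wait for RPC) on its own socket, so the
    # controller is attached there before perform_tests kicks off the qd=1024 run:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
        -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
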
00:09:26.341 17408.00 IOPS, 68.00 MiB/s
[2024-11-17T03:26:06.111Z] 17775.00 IOPS, 69.43 MiB/s
[2024-11-17T03:26:07.492Z] 17802.33 IOPS, 69.54 MiB/s
[2024-11-17T03:26:08.429Z] 17920.00 IOPS, 70.00 MiB/s
[2024-11-17T03:26:09.484Z] 17987.20 IOPS, 70.26 MiB/s
[2024-11-17T03:26:10.422Z] 17969.00 IOPS, 70.19 MiB/s
[2024-11-17T03:26:11.363Z] 17993.14 IOPS, 70.29 MiB/s
[2024-11-17T03:26:12.301Z] 17925.00 IOPS, 70.02 MiB/s
[2024-11-17T03:26:13.239Z] 17976.89 IOPS, 70.22 MiB/s
[2024-11-17T03:26:13.239Z] 17974.20 IOPS, 70.21 MiB/s
00:09:34.409 Latency(us)
00:09:34.409 [2024-11-17T03:26:13.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:34.409 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:34.409 Verification LBA range: start 0x0 length 0x4000
00:09:34.409 NVMe0n1 : 10.03 17991.47 70.28 0.00 0.00 56752.57 6370.10 36700.16
00:09:34.409 [2024-11-17T03:26:13.239Z] ===================================================================================================================
00:09:34.409 [2024-11-17T03:26:13.239Z] Total : 17991.47 70.28 0.00 0.00 56752.57 6370.10 36700.16
00:09:34.409 {
00:09:34.409   "results": [
00:09:34.409     {
00:09:34.409       "job": "NVMe0n1",
00:09:34.409       "core_mask": "0x1",
00:09:34.409       "workload": "verify",
00:09:34.409       "status": "finished",
00:09:34.409       "verify_range": {
00:09:34.409         "start": 0,
00:09:34.409         "length": 16384
00:09:34.409       },
00:09:34.409       "queue_depth": 1024,
00:09:34.409       "io_size": 4096,
00:09:34.409       "runtime": 10.033031,
00:09:34.409       "iops": 17991.472367622508,
00:09:34.409       "mibps": 70.27918893602542,
00:09:34.409       "io_failed": 0,
00:09:34.410       "io_timeout": 0,
00:09:34.410       "avg_latency_us": 56752.573007643936,
00:09:34.410       "min_latency_us": 6370.0992,
00:09:34.410       "max_latency_us": 36700.16
00:09:34.410     }
00:09:34.410   ],
00:09:34.410   "core_count": 1
00:09:34.410 }
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 172315
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 172315 ']'
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 172315
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172315
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172315'
killing process with pid 172315
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 172315
Received shutdown signal, test time was about 10.000000 seconds
00:09:34.410
00:09:34.410 Latency(us)
00:09:34.410 [2024-11-17T03:26:13.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:34.410 [2024-11-17T03:26:13.240Z] ===================================================================================================================
00:09:34.410 [2024-11-17T03:26:13.240Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:34.410 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 172315
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 172165 ']'
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 172165
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 172165 ']'
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 172165
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:34.676 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172165
00:09:34.936 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:34.936 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:34.936 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172165'
killing process with pid 172165
00:09:34.936 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 172165
00:09:34.936 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 172165
00:09:34.936 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:34.936 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:09:34.936
00:09:34.936 real 0m19.065s
00:09:34.936 user 0m24.329s
00:09:34.936 sys 0m6.292s
04:26:13
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.936 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.936 ************************************ 00:09:34.936 END TEST nvmf_queue_depth 00:09:34.936 ************************************ 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.196 ************************************ 00:09:35.196 START TEST nvmf_target_multipath 00:09:35.196 ************************************ 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:35.196 * Looking for test storage... 00:09:35.196 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.196 04:26:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.196 --rc genhtml_branch_coverage=1 00:09:35.196 --rc genhtml_function_coverage=1 00:09:35.196 --rc genhtml_legend=1 00:09:35.196 --rc geninfo_all_blocks=1 00:09:35.196 --rc geninfo_unexecuted_blocks=1 00:09:35.196 00:09:35.196 ' 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.196 --rc genhtml_branch_coverage=1 00:09:35.196 --rc genhtml_function_coverage=1 00:09:35.196 --rc genhtml_legend=1 00:09:35.196 --rc geninfo_all_blocks=1 00:09:35.196 --rc geninfo_unexecuted_blocks=1 00:09:35.196 00:09:35.196 ' 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.196 --rc genhtml_branch_coverage=1 00:09:35.196 --rc genhtml_function_coverage=1 00:09:35.196 --rc genhtml_legend=1 00:09:35.196 --rc geninfo_all_blocks=1 00:09:35.196 --rc geninfo_unexecuted_blocks=1 00:09:35.196 00:09:35.196 ' 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.196 --rc genhtml_branch_coverage=1 00:09:35.196 --rc genhtml_function_coverage=1 00:09:35.196 --rc genhtml_legend=1 00:09:35.196 --rc geninfo_all_blocks=1 00:09:35.196 --rc geninfo_unexecuted_blocks=1 00:09:35.196 00:09:35.196 ' 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.196 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.456 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.456 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:35.456 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:35.456 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.457 04:26:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.593 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:43.594 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:43.594 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:43.594 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:43.594 
04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:43.594 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
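
Before any of the interface probing above can succeed, rdma_device_init has to load the kernel RDMA stack; the modprobe sequence from nvmf/common.sh@66-72 in the trace boils down to:

    # Same module list as traced above; modprobe resolves dependencies itself,
    # so the order matters only loosely. A failure here usually means the
    # running kernel was built without RDMA support.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
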
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
    altname enp217s0f0np0
    altname ens818f0np0
    inet 192.168.100.8/24 scope global mlx_0_0
       valid_lft forever preferred_lft forever
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:43.594 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:43.594 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:43.594 altname enp217s0f1np1 00:09:43.594 altname ens818f1np1 00:09:43.594 inet 192.168.100.9/24 scope global mlx_0_1 00:09:43.594 valid_lft forever preferred_lft forever 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:43.594 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:43.595 192.168.100.9' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:43.595 192.168.100.9' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:43.595 192.168.100.9' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:09:43.595 run this test only with TCP transport for now 00:09:43.595 04:26:21 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:43.595 rmmod nvme_rdma 00:09:43.595 rmmod nvme_fabrics 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:43.595 00:09:43.595 real 0m7.684s 00:09:43.595 user 0m2.233s 00:09:43.595 sys 0m5.636s 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
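
The nvmftestfini path traced above unloads nvme-rdma and nvme-fabrics inside a set +e / {1..20} retry loop, since the modules can stay busy briefly while queues drain. A simplified sketch of that shape (the break and sleep handling is an assumption; only the set +e, the 20-iteration loop, and the two modprobe -r calls appear in the trace):

    sync                       # flush before touching the module stack
    set +e                     # tolerate "module in use" on early attempts
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                # assumed back-off between attempts
    done
    set -e
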
00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:43.595 ************************************ 00:09:43.595 END TEST nvmf_target_multipath 00:09:43.595 ************************************ 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.595 ************************************ 00:09:43.595 START TEST nvmf_zcopy 00:09:43.595 ************************************ 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:43.595 * Looking for test storage... 00:09:43.595 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.595 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.596 --rc genhtml_branch_coverage=1 00:09:43.596 --rc genhtml_function_coverage=1 00:09:43.596 --rc genhtml_legend=1 00:09:43.596 --rc geninfo_all_blocks=1 00:09:43.596 --rc geninfo_unexecuted_blocks=1 00:09:43.596 00:09:43.596 ' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.596 --rc genhtml_branch_coverage=1 00:09:43.596 --rc genhtml_function_coverage=1 00:09:43.596 --rc genhtml_legend=1 00:09:43.596 --rc geninfo_all_blocks=1 00:09:43.596 --rc geninfo_unexecuted_blocks=1 00:09:43.596 00:09:43.596 ' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.596 --rc genhtml_branch_coverage=1 00:09:43.596 --rc genhtml_function_coverage=1 00:09:43.596 --rc genhtml_legend=1 00:09:43.596 --rc geninfo_all_blocks=1 00:09:43.596 --rc geninfo_unexecuted_blocks=1 00:09:43.596 00:09:43.596 ' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.596 --rc genhtml_branch_coverage=1 00:09:43.596 --rc genhtml_function_coverage=1 00:09:43.596 --rc genhtml_legend=1 00:09:43.596 --rc geninfo_all_blocks=1 00:09:43.596 --rc geninfo_unexecuted_blocks=1 00:09:43.596 00:09:43.596 ' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.596 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.596 04:26:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:50.254 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:50.254 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
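Each mlx5 port found above is matched against a table of Mellanox device IDs before its netdev is collected; 0x1015 (reported for both 0000:d9:00.0 and 0000:d9:00.1 here, i.e. ConnectX-4 Lx ports) falls through the 0x1017/0x1019 special cases, and the RDMA transport swaps in a different nvme-connect command line. A sketch of that gating with the values from this run; on other hosts the PCI addresses and device IDs would differ:

    mellanox=0x15b3                    # vendor ID matched in the trace
    devid=0x1015                       # both ports in this run
    [[ $devid == 0x1017 ]] && :        # ConnectX-5 branch (not taken here)
    [[ $devid == 0x1019 ]] && :        # ConnectX-5 Ex branch (not taken here)
    [[ rdma == rdma ]] && NVME_CONNECT='nvme connect -i 15'   # as set at nvmf/common.sh@388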
00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:50.254 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:50.254 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:50.254 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:50.255 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:50.255 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:50.255 altname enp217s0f0np0 00:09:50.255 altname ens818f0np0 00:09:50.255 inet 192.168.100.8/24 scope global mlx_0_0 
00:09:50.255 valid_lft forever preferred_lft forever 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:50.255 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:50.255 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:50.255 altname enp217s0f1np1 00:09:50.255 altname ens818f1np1 00:09:50.255 inet 192.168.100.9/24 scope global mlx_0_1 00:09:50.255 valid_lft forever preferred_lft forever 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:50.255 04:26:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:50.255 04:26:29 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:50.255 192.168.100.9' 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:50.255 192.168.100.9' 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:50.255 192.168.100.9' 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:50.255 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=180995 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 180995 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 180995 ']' 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.574 [2024-11-17 04:26:29.141760] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:50.574 [2024-11-17 04:26:29.141809] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.574 [2024-11-17 04:26:29.231004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.574 [2024-11-17 04:26:29.251541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.574 [2024-11-17 04:26:29.251577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.574 [2024-11-17 04:26:29.251586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.574 [2024-11-17 04:26:29.251595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.574 [2024-11-17 04:26:29.251602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
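nvmfappstart launches the target and blocks until its RPC socket answers; the pid printed above (180995) is handed to waitforlisten. A sketch of that handshake using the exact command line from this log:

    # Flags as traced: shm instance 0, trace group mask 0xFFFF, core mask 0x2.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                    # 180995 in this run
    waitforlisten "$nvmfpid"      # polls /var/tmp/spdk.sock until the app listens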
00:09:50.574 [2024-11-17 04:26:29.252200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:09:50.574 Unsupported transport: rdma 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:09:50.574 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:50.859 nvmf_trace.0 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:50.859 rmmod nvme_rdma 00:09:50.859 rmmod nvme_fabrics 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
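Because zcopy is TCP-only, the test exits immediately, and the EXIT trap first archives the target's shared-memory trace before unloading modules. A sketch of that capture step (process_shm --id 0), assembled from the commands traced above:

    # Collect every /dev/shm file for app instance 0 and archive it
    # next to the build output; this run found nvmf_trace.0.
    id=0
    shm_files=$(find /dev/shm -name "*.${id}" -printf '%f\n')
    for n in $shm_files; do
        tar -C /dev/shm/ -cvzf \
            /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/${n}_shm.tar.gz "$n"
    done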
00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 180995 ']' 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 180995 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 180995 ']' 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 180995 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 180995 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 180995' 00:09:50.859 killing process with pid 180995 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 180995 00:09:50.859 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 180995 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:51.125 00:09:51.125 real 0m8.108s 00:09:51.125 user 0m2.851s 00:09:51.125 sys 0m5.884s 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.125 ************************************ 00:09:51.125 END TEST nvmf_zcopy 00:09:51.125 ************************************ 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:51.125 ************************************ 00:09:51.125 START TEST nvmf_nmic 00:09:51.125 ************************************ 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:51.125 * Looking for test storage... 
00:09:51.125 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.125 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.398 --rc genhtml_branch_coverage=1 00:09:51.398 --rc genhtml_function_coverage=1 00:09:51.398 --rc genhtml_legend=1 00:09:51.398 --rc geninfo_all_blocks=1 00:09:51.398 --rc geninfo_unexecuted_blocks=1 00:09:51.398 00:09:51.398 ' 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.398 --rc genhtml_branch_coverage=1 00:09:51.398 --rc genhtml_function_coverage=1 00:09:51.398 --rc genhtml_legend=1 00:09:51.398 --rc geninfo_all_blocks=1 00:09:51.398 --rc geninfo_unexecuted_blocks=1 00:09:51.398 00:09:51.398 ' 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:51.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.398 --rc genhtml_branch_coverage=1 00:09:51.398 --rc genhtml_function_coverage=1 00:09:51.398 --rc genhtml_legend=1 00:09:51.398 --rc geninfo_all_blocks=1 00:09:51.398 --rc geninfo_unexecuted_blocks=1 00:09:51.398 00:09:51.398 ' 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.398 --rc genhtml_branch_coverage=1 00:09:51.398 --rc genhtml_function_coverage=1 00:09:51.398 --rc genhtml_legend=1 00:09:51.398 --rc geninfo_all_blocks=1 00:09:51.398 --rc geninfo_unexecuted_blocks=1 00:09:51.398 00:09:51.398 ' 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.398 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.399 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.399 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.399 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.399 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.399 04:26:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
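The "[: : integer expression expected" complaint recurs once per test because each one re-sources common.sh, whose line 33 runs a numeric test on a variable that is empty in this environment ('[' '' -eq 1 ']'). A guarded sketch of the failing pattern; the variable name below is hypothetical, since the trace only shows its (empty) expansion:

    # Default an unset/empty flag to 0 so '-eq' always sees an integer.
    some_flag=""                            # hypothetical stand-in for the
    if [ "${some_flag:-0}" -eq 1 ]; then    # real variable at common.sh:33
        echo "flag enabled"
    fi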
00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:51.399 04:26:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.731 04:26:37 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:59.731 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:59.731 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:59.731 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.731 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:59.732 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:59.732 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:59.732 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:59.732 altname enp217s0f0np0 00:09:59.732 altname 
ens818f0np0 00:09:59.732 inet 192.168.100.8/24 scope global mlx_0_0 00:09:59.732 valid_lft forever preferred_lft forever 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:59.732 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:59.732 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:59.732 altname enp217s0f1np1 00:09:59.732 altname ens818f1np1 00:09:59.732 inet 192.168.100.9/24 scope global mlx_0_1 00:09:59.732 valid_lft forever preferred_lft forever 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:59.732 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:59.733 192.168.100.9' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:59.733 192.168.100.9' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:59.733 192.168.100.9' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=184527 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 184527 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 184527 ']' 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 [2024-11-17 04:26:37.341451] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:59.733 [2024-11-17 04:26:37.341499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.733 [2024-11-17 04:26:37.431803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.733 [2024-11-17 04:26:37.455350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.733 [2024-11-17 04:26:37.455390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.733 [2024-11-17 04:26:37.455399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.733 [2024-11-17 04:26:37.455408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.733 [2024-11-17 04:26:37.455431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
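[Editor's note] nvmf_tgt is launched here with -i 0 -e 0xFFFF -m 0xF (a four-core mask; the four reactor notices follow below), and the harness blocks in waitforlisten until the RPC socket answers. A minimal sketch of that wait, assuming the stock scripts/rpc.py and its rpc_get_methods method:

  # Start the target and poll /var/tmp/spdk.sock until it accepts RPCs.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || exit 1   # stop waiting if the target died
    sleep 0.5
  done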
00:09:59.733 [2024-11-17 04:26:37.457180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.733 [2024-11-17 04:26:37.457290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.733 [2024-11-17 04:26:37.457401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.733 [2024-11-17 04:26:37.457402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 [2024-11-17 04:26:37.622168] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1891c50/0x1896100) succeed. 00:09:59.733 [2024-11-17 04:26:37.631458] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1893290/0x18d77a0) succeed. 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 Malloc0 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:59.733 04:26:37 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 [2024-11-17 04:26:37.820846] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:59.733 test case1: single bdev can't be used in multiple subsystems 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:59.733 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.734 [2024-11-17 04:26:37.848634] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:59.734 [2024-11-17 04:26:37.848656] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:59.734 [2024-11-17 04:26:37.848665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.734 request: 00:09:59.734 { 00:09:59.734 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:59.734 "namespace": { 00:09:59.734 "bdev_name": "Malloc0", 00:09:59.734 "no_auto_visible": false 00:09:59.734 }, 00:09:59.734 "method": "nvmf_subsystem_add_ns", 00:09:59.734 "req_id": 1 00:09:59.734 } 00:09:59.734 Got JSON-RPC error response 00:09:59.734 response: 00:09:59.734 { 00:09:59.734 "code": -32602, 00:09:59.734 "message": "Invalid parameters" 00:09:59.734 } 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:59.734 Adding namespace failed - expected result. 
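[Editor's note] Test case1 above is complete: Malloc0 is claimed exclusive_write by cnode1, so adding it to cnode2 fails with JSON-RPC error -32602 as intended. For reference, the same RPC sequence condensed into direct rpc.py calls (a sketch; the harness issues these through its rpc_cmd wrapper):

  rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # Expected failure: the bdev is already claimed by cnode1.
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'rejected as expected'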
00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:59.734 test case2: host connect to nvmf target in multiple paths 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.734 [2024-11-17 04:26:37.864719] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.734 04:26:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:00.348 04:26:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:01.362 04:26:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:01.362 04:26:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:01.362 04:26:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.362 04:26:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:01.362 04:26:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:03.385 04:26:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:03.385 04:26:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:03.385 04:26:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.385 04:26:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:03.385 04:26:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.385 04:26:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:03.385 04:26:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:03.385 [global] 00:10:03.385 thread=1 00:10:03.385 invalidate=1 00:10:03.385 rw=write 00:10:03.385 time_based=1 00:10:03.385 runtime=1 00:10:03.385 ioengine=libaio 00:10:03.385 direct=1 00:10:03.385 bs=4096 00:10:03.385 iodepth=1 00:10:03.385 norandommap=0 00:10:03.385 numjobs=1 00:10:03.385 00:10:03.385 verify_dump=1 00:10:03.385 verify_backlog=512 00:10:03.385 verify_state_save=0 00:10:03.385 do_verify=1 00:10:03.385 verify=crc32c-intel 00:10:03.385 [job0] 00:10:03.385 filename=/dev/nvme0n1 00:10:03.385 Could not set queue depth (nvme0n1) 00:10:03.684 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.684 fio-3.35 00:10:03.684 Starting 1 thread 00:10:05.122 00:10:05.122 job0: (groupid=0, jobs=1): err= 0: pid=185518: Sun Nov 17 04:26:43 2024 00:10:05.122 read: IOPS=7101, BW=27.7MiB/s (29.1MB/s)(27.8MiB/1001msec) 00:10:05.122 slat (nsec): min=3204, max=36068, avg=8770.38, stdev=1018.01 00:10:05.122 clat (usec): min=41, max=160, avg=58.44, stdev= 3.91 00:10:05.122 lat (usec): min=48, max=163, avg=67.22, stdev= 4.01 00:10:05.122 clat percentiles (usec): 00:10:05.122 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:10:05.122 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:10:05.122 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 64], 95.00th=[ 65], 00:10:05.122 | 99.00th=[ 69], 99.50th=[ 70], 99.90th=[ 74], 99.95th=[ 79], 00:10:05.122 | 99.99th=[ 161] 00:10:05.122 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:10:05.122 slat (nsec): min=10599, max=49353, avg=11279.29, stdev=1182.08 00:10:05.122 clat (usec): min=32, max=111, avg=56.25, stdev= 3.88 00:10:05.122 lat (usec): min=57, max=161, avg=67.53, stdev= 4.07 00:10:05.122 clat percentiles (usec): 00:10:05.122 | 1.00th=[ 49], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 53], 00:10:05.123 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:10:05.123 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 63], 00:10:05.123 | 99.00th=[ 66], 99.50th=[ 68], 99.90th=[ 73], 99.95th=[ 80], 00:10:05.123 | 99.99th=[ 113] 00:10:05.123 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:10:05.123 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:10:05.123 lat (usec) : 50=2.41%, 100=97.58%, 250=0.01% 00:10:05.123 cpu : usr=11.20%, sys=18.70%, ctx=14277, majf=0, minf=1 00:10:05.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.123 issued rwts: total=7109,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.123 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.123 00:10:05.123 Run status group 0 (all jobs): 00:10:05.123 READ: bw=27.7MiB/s (29.1MB/s), 27.7MiB/s-27.7MiB/s (29.1MB/s-29.1MB/s), io=27.8MiB (29.1MB), run=1001-1001msec 00:10:05.123 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:10:05.123 00:10:05.123 Disk stats (read/write): 00:10:05.123 nvme0n1: ios=6253/6656, merge=0/0, ticks=322/323, in_queue=645, util=90.58% 00:10:05.123 04:26:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:07.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:07.103 04:26:45 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:07.103 rmmod nvme_rdma 00:10:07.103 rmmod nvme_fabrics 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 184527 ']' 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 184527 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 184527 ']' 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 184527 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 184527 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 184527' 00:10:07.103 killing process with pid 184527 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 184527 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 184527 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:07.103 00:10:07.103 real 0m16.124s 00:10:07.103 user 0m44.038s 00:10:07.103 sys 0m6.690s 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.103 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.103 ************************************ 00:10:07.103 END TEST nvmf_nmic 00:10:07.103 
************************************ 00:10:07.363 04:26:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:07.363 04:26:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.363 04:26:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.363 04:26:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.363 ************************************ 00:10:07.363 START TEST nvmf_fio_target 00:10:07.363 ************************************ 00:10:07.363 04:26:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:07.363 * Looking for test storage... 00:10:07.363 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.363 --rc genhtml_branch_coverage=1 00:10:07.363 --rc genhtml_function_coverage=1 00:10:07.363 --rc genhtml_legend=1 00:10:07.363 --rc geninfo_all_blocks=1 00:10:07.363 --rc geninfo_unexecuted_blocks=1 00:10:07.363 00:10:07.363 ' 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.363 --rc genhtml_branch_coverage=1 00:10:07.363 --rc genhtml_function_coverage=1 00:10:07.363 --rc genhtml_legend=1 00:10:07.363 --rc geninfo_all_blocks=1 00:10:07.363 --rc geninfo_unexecuted_blocks=1 00:10:07.363 00:10:07.363 ' 00:10:07.363 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.363 --rc genhtml_branch_coverage=1 00:10:07.363 --rc genhtml_function_coverage=1 00:10:07.363 --rc genhtml_legend=1 00:10:07.363 --rc geninfo_all_blocks=1 00:10:07.364 --rc geninfo_unexecuted_blocks=1 00:10:07.364 00:10:07.364 ' 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.364 --rc genhtml_branch_coverage=1 00:10:07.364 --rc genhtml_function_coverage=1 00:10:07.364 --rc genhtml_legend=1 00:10:07.364 --rc geninfo_all_blocks=1 00:10:07.364 --rc geninfo_unexecuted_blocks=1 00:10:07.364 00:10:07.364 ' 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.364 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.625 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.625 
04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.625 04:26:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:15.762 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:15.762 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:15.762 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:15.762 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.762 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:15.763 04:26:53 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:15.763 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:15.763 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:15.763 altname enp217s0f0np0 00:10:15.763 altname ens818f0np0 00:10:15.763 inet 192.168.100.8/24 scope global mlx_0_0 00:10:15.763 valid_lft forever preferred_lft forever 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:15.763 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:15.763 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:15.763 altname enp217s0f1np1 00:10:15.763 altname ens818f1np1 00:10:15.763 inet 192.168.100.9/24 scope global mlx_0_1 00:10:15.763 valid_lft forever preferred_lft forever 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:15.763 04:26:53 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:15.763 192.168.100.9' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:15.763 192.168.100.9' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:15.763 192.168.100.9' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:15.763 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=189513 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 189513 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 189513 ']' 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.764 [2024-11-17 04:26:53.531196] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:15.764 [2024-11-17 04:26:53.531247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.764 [2024-11-17 04:26:53.622766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.764 [2024-11-17 04:26:53.645393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:15.764 [2024-11-17 04:26:53.645435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.764 [2024-11-17 04:26:53.645445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.764 [2024-11-17 04:26:53.645453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.764 [2024-11-17 04:26:53.645460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.764 [2024-11-17 04:26:53.649951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.764 [2024-11-17 04:26:53.649993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.764 [2024-11-17 04:26:53.650099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.764 [2024-11-17 04:26:53.650100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.764 04:26:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:15.764 [2024-11-17 04:26:53.977548] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7fec50/0x803100) succeed. 00:10:15.764 [2024-11-17 04:26:53.986663] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x800290/0x8447a0) succeed. 
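Both ConnectX-4 Lx ports came up as mlx_0_0/mlx_0_1 with 192.168.100.8/9, and nvmf_create_transport has just created the two IB devices. The trace that follows provisions the storage side over JSON-RPC; condensed into one runnable sketch — the commands, flags, and values are the ones visible in this log, while the variable plumbing around them is assumed:

    # Create the RDMA transport, back it with malloc bdevs plus a RAID-0 set,
    # expose everything through one subsystem, and listen on the first RDMA IP.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    malloc_bdevs="$($rpc bdev_malloc_create 64 512) $($rpc bdev_malloc_create 64 512)"
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 \
        -b "$($rpc bdev_malloc_create 64 512) $($rpc bdev_malloc_create 64 512)"
    # (The full run also builds a three-bdev concat set, concat0, the same way.)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for b in $malloc_bdevs raid0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420

The subsequent "nvme connect -i 15 ... -a 192.168.100.8 -s 4420" step then surfaces the four namespaces as /dev/nvme0n1 through /dev/nvme0n4, which is where the fio job files below point.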
00:10:15.764 04:26:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.764 04:26:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:15.764 04:26:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.764 04:26:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:15.764 04:26:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.024 04:26:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:16.024 04:26:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.284 04:26:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:16.284 04:26:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:16.544 04:26:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.804 04:26:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:16.804 04:26:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.804 04:26:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:16.804 04:26:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.064 04:26:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:17.064 04:26:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:17.325 04:26:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:17.585 04:26:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:17.585 04:26:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.845 04:26:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:17.845 04:26:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:17.845 04:26:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:18.105 [2024-11-17 04:26:56.820636] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:18.105 04:26:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:18.366 04:26:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:18.626 04:26:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:19.566 04:26:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:19.566 04:26:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:19.566 04:26:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.566 04:26:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:19.566 04:26:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:19.566 04:26:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:21.476 04:27:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:21.476 04:27:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:21.476 04:27:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.476 04:27:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:21.476 04:27:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.476 04:27:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:21.476 04:27:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:21.476 [global] 00:10:21.476 thread=1 00:10:21.476 invalidate=1 00:10:21.476 rw=write 00:10:21.476 time_based=1 00:10:21.476 runtime=1 00:10:21.476 ioengine=libaio 00:10:21.476 direct=1 00:10:21.476 bs=4096 00:10:21.476 iodepth=1 00:10:21.476 norandommap=0 00:10:21.476 numjobs=1 00:10:21.476 00:10:21.476 verify_dump=1 00:10:21.476 verify_backlog=512 00:10:21.476 verify_state_save=0 00:10:21.476 do_verify=1 00:10:21.476 verify=crc32c-intel 00:10:21.476 [job0] 00:10:21.476 filename=/dev/nvme0n1 00:10:21.476 [job1] 00:10:21.476 filename=/dev/nvme0n2 00:10:21.476 [job2] 00:10:21.476 filename=/dev/nvme0n3 00:10:21.476 [job3] 00:10:21.476 filename=/dev/nvme0n4 00:10:21.783 Could not set queue depth (nvme0n1) 00:10:21.783 Could not set queue depth (nvme0n2) 00:10:21.783 Could not set queue depth (nvme0n3) 00:10:21.783 Could not set queue depth (nvme0n4) 00:10:22.051 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.051 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.051 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.051 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.051 fio-3.35 00:10:22.051 Starting 4 threads 00:10:23.434 00:10:23.434 job0: (groupid=0, jobs=1): err= 0: pid=190995: Sun Nov 17 04:27:01 2024 00:10:23.434 read: IOPS=3380, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:10:23.434 slat (nsec): min=8325, max=49738, avg=9550.29, stdev=2103.84 00:10:23.434 clat (usec): min=62, max=220, avg=131.89, stdev=27.76 00:10:23.434 lat (usec): min=78, max=229, avg=141.44, stdev=27.79 00:10:23.434 clat percentiles (usec): 00:10:23.434 | 1.00th=[ 74], 5.00th=[ 79], 10.00th=[ 84], 20.00th=[ 114], 00:10:23.434 | 30.00th=[ 127], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:10:23.434 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 172], 00:10:23.434 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 204], 99.95th=[ 210], 00:10:23.434 | 99.99th=[ 221] 00:10:23.434 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:23.434 slat (nsec): min=9071, max=69263, avg=11685.44, stdev=2162.52 00:10:23.434 clat (usec): min=56, max=212, avg=129.00, stdev=28.20 00:10:23.434 lat (usec): min=75, max=225, avg=140.69, stdev=28.22 00:10:23.434 clat percentiles (usec): 00:10:23.434 | 1.00th=[ 71], 5.00th=[ 76], 10.00th=[ 81], 20.00th=[ 110], 00:10:23.434 | 30.00th=[ 118], 40.00th=[ 127], 50.00th=[ 137], 60.00th=[ 141], 00:10:23.434 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 169], 00:10:23.434 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 202], 99.95th=[ 208], 00:10:23.434 | 99.99th=[ 212] 00:10:23.434 bw ( KiB/s): min=16384, max=16384, per=26.18%, avg=16384.00, stdev= 0.00, samples=1 00:10:23.434 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:23.434 lat (usec) : 100=17.85%, 250=82.15% 00:10:23.434 cpu : usr=5.70%, sys=9.30%, ctx=6969, majf=0, minf=1 00:10:23.434 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.434 issued rwts: total=3384,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.434 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.434 job1: (groupid=0, jobs=1): err= 0: pid=191016: Sun Nov 17 04:27:01 2024 00:10:23.434 read: IOPS=3640, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1001msec) 00:10:23.434 slat (nsec): min=8300, max=34544, avg=9148.57, stdev=901.88 00:10:23.434 clat (usec): min=63, max=210, avg=119.30, stdev=33.40 00:10:23.434 lat (usec): min=72, max=219, avg=128.44, stdev=33.54 00:10:23.434 clat percentiles (usec): 00:10:23.434 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 79], 00:10:23.434 | 30.00th=[ 84], 40.00th=[ 118], 50.00th=[ 133], 60.00th=[ 139], 00:10:23.434 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 165], 00:10:23.434 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 200], 99.95th=[ 204], 00:10:23.434 | 99.99th=[ 210] 00:10:23.434 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:10:23.434 slat (nsec): min=10169, max=45971, avg=11524.17, stdev=1456.57 00:10:23.434 clat (usec): min=59, max=205, 
avg=113.82, stdev=32.64 00:10:23.434 lat (usec): min=70, max=217, avg=125.35, stdev=32.72 00:10:23.434 clat percentiles (usec): 00:10:23.434 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 77], 00:10:23.434 | 30.00th=[ 82], 40.00th=[ 109], 50.00th=[ 119], 60.00th=[ 131], 00:10:23.434 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 163], 00:10:23.434 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 196], 99.95th=[ 198], 00:10:23.434 | 99.99th=[ 206] 00:10:23.434 bw ( KiB/s): min=19240, max=19240, per=30.75%, avg=19240.00, stdev= 0.00, samples=1 00:10:23.434 iops : min= 4810, max= 4810, avg=4810.00, stdev= 0.00, samples=1 00:10:23.434 lat (usec) : 100=37.36%, 250=62.64% 00:10:23.434 cpu : usr=5.80%, sys=10.80%, ctx=7740, majf=0, minf=1 00:10:23.434 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.434 issued rwts: total=3644,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.434 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.434 job2: (groupid=0, jobs=1): err= 0: pid=191037: Sun Nov 17 04:27:01 2024 00:10:23.434 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:10:23.434 slat (nsec): min=8300, max=29090, avg=9127.67, stdev=985.64 00:10:23.434 clat (usec): min=68, max=205, avg=97.40, stdev=23.27 00:10:23.434 lat (usec): min=80, max=214, avg=106.53, stdev=23.48 00:10:23.434 clat percentiles (usec): 00:10:23.434 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 84], 00:10:23.434 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 90], 00:10:23.434 | 70.00th=[ 93], 80.00th=[ 103], 90.00th=[ 139], 95.00th=[ 151], 00:10:23.434 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 196], 99.95th=[ 202], 00:10:23.434 | 99.99th=[ 206] 00:10:23.434 write: IOPS=4622, BW=18.1MiB/s (18.9MB/s)(18.1MiB/1001msec); 0 zone resets 00:10:23.434 slat (nsec): min=10354, max=46265, avg=11910.34, stdev=2341.44 00:10:23.434 clat (usec): min=68, max=216, avg=92.90, stdev=22.27 00:10:23.434 lat (usec): min=81, max=245, avg=104.81, stdev=23.20 00:10:23.434 clat percentiles (usec): 00:10:23.434 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:10:23.434 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:10:23.435 | 70.00th=[ 90], 80.00th=[ 111], 90.00th=[ 124], 95.00th=[ 143], 00:10:23.435 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 198], 00:10:23.435 | 99.99th=[ 217] 00:10:23.435 bw ( KiB/s): min=16384, max=16384, per=26.18%, avg=16384.00, stdev= 0.00, samples=1 00:10:23.435 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:23.435 lat (usec) : 100=78.28%, 250=21.72% 00:10:23.435 cpu : usr=8.70%, sys=11.10%, ctx=9235, majf=0, minf=1 00:10:23.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.435 issued rwts: total=4608,4627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.435 job3: (groupid=0, jobs=1): err= 0: pid=191044: Sun Nov 17 04:27:01 2024 00:10:23.435 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:23.435 slat (nsec): min=8556, max=39957, avg=11769.00, stdev=4173.92 00:10:23.435 clat (usec): min=70, max=220, avg=144.52, 
stdev=21.64 00:10:23.435 lat (usec): min=88, max=240, avg=156.29, stdev=23.10 00:10:23.435 clat percentiles (usec): 00:10:23.435 | 1.00th=[ 89], 5.00th=[ 114], 10.00th=[ 122], 20.00th=[ 129], 00:10:23.435 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:10:23.435 | 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 176], 95.00th=[ 188], 00:10:23.435 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 212], 99.95th=[ 219], 00:10:23.435 | 99.99th=[ 221] 00:10:23.435 write: IOPS=3349, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec); 0 zone resets 00:10:23.435 slat (nsec): min=10698, max=53911, avg=13537.14, stdev=3681.38 00:10:23.435 clat (usec): min=70, max=214, avg=136.14, stdev=23.44 00:10:23.435 lat (usec): min=83, max=226, avg=149.67, stdev=24.17 00:10:23.435 clat percentiles (usec): 00:10:23.435 | 1.00th=[ 83], 5.00th=[ 102], 10.00th=[ 110], 20.00th=[ 117], 00:10:23.435 | 30.00th=[ 124], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 141], 00:10:23.435 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 172], 95.00th=[ 182], 00:10:23.435 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 200], 99.95th=[ 210], 00:10:23.435 | 99.99th=[ 215] 00:10:23.435 bw ( KiB/s): min=14424, max=14424, per=23.05%, avg=14424.00, stdev= 0.00, samples=1 00:10:23.435 iops : min= 3606, max= 3606, avg=3606.00, stdev= 0.00, samples=1 00:10:23.435 lat (usec) : 100=3.63%, 250=96.37% 00:10:23.435 cpu : usr=5.20%, sys=9.20%, ctx=6425, majf=0, minf=1 00:10:23.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.435 issued rwts: total=3072,3353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.435 00:10:23.435 Run status group 0 (all jobs): 00:10:23.435 READ: bw=57.4MiB/s (60.2MB/s), 12.0MiB/s-18.0MiB/s (12.6MB/s-18.9MB/s), io=57.5MiB (60.2MB), run=1001-1001msec 00:10:23.435 WRITE: bw=61.1MiB/s (64.1MB/s), 13.1MiB/s-18.1MiB/s (13.7MB/s-18.9MB/s), io=61.2MiB (64.1MB), run=1001-1001msec 00:10:23.435 00:10:23.435 Disk stats (read/write): 00:10:23.435 nvme0n1: ios=2820/3072, merge=0/0, ticks=360/356, in_queue=716, util=83.95% 00:10:23.435 nvme0n2: ios=3072/3541, merge=0/0, ticks=333/354, in_queue=687, util=84.94% 00:10:23.435 nvme0n3: ios=3584/3885, merge=0/0, ticks=337/331, in_queue=668, util=88.38% 00:10:23.435 nvme0n4: ios=2560/2811, merge=0/0, ticks=348/352, in_queue=700, util=89.43% 00:10:23.435 04:27:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:23.435 [global] 00:10:23.435 thread=1 00:10:23.435 invalidate=1 00:10:23.435 rw=randwrite 00:10:23.435 time_based=1 00:10:23.435 runtime=1 00:10:23.435 ioengine=libaio 00:10:23.435 direct=1 00:10:23.435 bs=4096 00:10:23.435 iodepth=1 00:10:23.435 norandommap=0 00:10:23.435 numjobs=1 00:10:23.435 00:10:23.435 verify_dump=1 00:10:23.435 verify_backlog=512 00:10:23.435 verify_state_save=0 00:10:23.435 do_verify=1 00:10:23.435 verify=crc32c-intel 00:10:23.435 [job0] 00:10:23.435 filename=/dev/nvme0n1 00:10:23.435 [job1] 00:10:23.435 filename=/dev/nvme0n2 00:10:23.435 [job2] 00:10:23.435 filename=/dev/nvme0n3 00:10:23.435 [job3] 00:10:23.435 filename=/dev/nvme0n4 00:10:23.435 Could not set queue depth (nvme0n1) 00:10:23.435 Could not set queue depth (nvme0n2) 00:10:23.435 Could not set queue depth 
(nvme0n3) 00:10:23.435 Could not set queue depth (nvme0n4) 00:10:23.695 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.695 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.695 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.695 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.695 fio-3.35 00:10:23.695 Starting 4 threads 00:10:25.080 00:10:25.080 job0: (groupid=0, jobs=1): err= 0: pid=191525: Sun Nov 17 04:27:03 2024 00:10:25.080 read: IOPS=3183, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec) 00:10:25.080 slat (nsec): min=8284, max=26299, avg=9247.49, stdev=887.06 00:10:25.080 clat (usec): min=69, max=210, avg=140.04, stdev=16.38 00:10:25.080 lat (usec): min=78, max=218, avg=149.29, stdev=16.38 00:10:25.080 clat percentiles (usec): 00:10:25.080 | 1.00th=[ 80], 5.00th=[ 114], 10.00th=[ 129], 20.00th=[ 135], 00:10:25.080 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:10:25.080 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 167], 00:10:25.080 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 206], 99.95th=[ 210], 00:10:25.080 | 99.99th=[ 210] 00:10:25.080 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:25.080 slat (nsec): min=10256, max=54741, avg=11413.23, stdev=1268.68 00:10:25.080 clat (usec): min=63, max=199, avg=129.93, stdev=16.82 00:10:25.080 lat (usec): min=74, max=210, avg=141.35, stdev=16.90 00:10:25.080 clat percentiles (usec): 00:10:25.080 | 1.00th=[ 75], 5.00th=[ 98], 10.00th=[ 112], 20.00th=[ 123], 00:10:25.080 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 135], 00:10:25.080 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 157], 00:10:25.080 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 192], 99.95th=[ 196], 00:10:25.080 | 99.99th=[ 200] 00:10:25.080 bw ( KiB/s): min=15144, max=15144, per=25.59%, avg=15144.00, stdev= 0.00, samples=1 00:10:25.080 iops : min= 3786, max= 3786, avg=3786.00, stdev= 0.00, samples=1 00:10:25.080 lat (usec) : 100=4.83%, 250=95.17% 00:10:25.080 cpu : usr=5.90%, sys=8.80%, ctx=6771, majf=0, minf=1 00:10:25.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.080 issued rwts: total=3187,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.080 job1: (groupid=0, jobs=1): err= 0: pid=191540: Sun Nov 17 04:27:03 2024 00:10:25.080 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:25.080 slat (nsec): min=8329, max=40403, avg=9741.91, stdev=2122.19 00:10:25.080 clat (usec): min=71, max=226, avg=143.12, stdev=16.26 00:10:25.080 lat (usec): min=84, max=241, avg=152.87, stdev=16.58 00:10:25.080 clat percentiles (usec): 00:10:25.080 | 1.00th=[ 89], 5.00th=[ 123], 10.00th=[ 133], 20.00th=[ 137], 00:10:25.080 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 143], 00:10:25.080 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 161], 95.00th=[ 178], 00:10:25.080 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 217], 99.95th=[ 219], 00:10:25.080 | 99.99th=[ 227] 00:10:25.080 write: IOPS=3544, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1001msec); 0 zone 
resets 00:10:25.080 slat (nsec): min=10004, max=52392, avg=11798.19, stdev=2464.59 00:10:25.080 clat (usec): min=56, max=203, avg=133.02, stdev=17.39 00:10:25.080 lat (usec): min=76, max=216, avg=144.82, stdev=17.78 00:10:25.080 clat percentiles (usec): 00:10:25.080 | 1.00th=[ 81], 5.00th=[ 105], 10.00th=[ 115], 20.00th=[ 126], 00:10:25.080 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 135], 00:10:25.080 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 153], 95.00th=[ 165], 00:10:25.080 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 198], 99.95th=[ 200], 00:10:25.080 | 99.99th=[ 204] 00:10:25.080 bw ( KiB/s): min=15192, max=15192, per=25.67%, avg=15192.00, stdev= 0.00, samples=1 00:10:25.080 iops : min= 3798, max= 3798, avg=3798.00, stdev= 0.00, samples=1 00:10:25.080 lat (usec) : 100=2.87%, 250=97.13% 00:10:25.080 cpu : usr=4.50%, sys=9.90%, ctx=6620, majf=0, minf=1 00:10:25.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.080 issued rwts: total=3072,3548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.080 job2: (groupid=0, jobs=1): err= 0: pid=191562: Sun Nov 17 04:27:03 2024 00:10:25.080 read: IOPS=3143, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec) 00:10:25.080 slat (nsec): min=8520, max=33007, avg=9480.27, stdev=1467.38 00:10:25.080 clat (usec): min=75, max=212, avg=141.63, stdev=14.39 00:10:25.080 lat (usec): min=84, max=227, avg=151.11, stdev=14.57 00:10:25.080 clat percentiles (usec): 00:10:25.080 | 1.00th=[ 87], 5.00th=[ 125], 10.00th=[ 133], 20.00th=[ 137], 00:10:25.080 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 143], 00:10:25.080 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 165], 00:10:25.080 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 208], 00:10:25.080 | 99.99th=[ 212] 00:10:25.080 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:25.080 slat (nsec): min=10466, max=52512, avg=11747.93, stdev=2687.47 00:10:25.080 clat (usec): min=70, max=193, avg=130.21, stdev=19.09 00:10:25.080 lat (usec): min=81, max=209, avg=141.96, stdev=19.66 00:10:25.080 clat percentiles (usec): 00:10:25.080 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 93], 20.00th=[ 127], 00:10:25.080 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 135], 00:10:25.080 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 157], 00:10:25.080 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 190], 00:10:25.080 | 99.99th=[ 194] 00:10:25.080 bw ( KiB/s): min=15160, max=15160, per=25.61%, avg=15160.00, stdev= 0.00, samples=1 00:10:25.080 iops : min= 3790, max= 3790, avg=3790.00, stdev= 0.00, samples=1 00:10:25.080 lat (usec) : 100=7.34%, 250=92.66% 00:10:25.080 cpu : usr=6.00%, sys=8.40%, ctx=6731, majf=0, minf=1 00:10:25.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.080 issued rwts: total=3147,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.080 job3: (groupid=0, jobs=1): err= 0: pid=191569: Sun Nov 17 04:27:03 2024 00:10:25.080 read: IOPS=3927, BW=15.3MiB/s 
(16.1MB/s)(15.4MiB/1001msec) 00:10:25.080 slat (nsec): min=8514, max=21878, avg=9145.63, stdev=868.00 00:10:25.080 clat (usec): min=73, max=224, avg=114.60, stdev=28.11 00:10:25.080 lat (usec): min=82, max=233, avg=123.75, stdev=28.28 00:10:25.080 clat percentiles (usec): 00:10:25.080 | 1.00th=[ 78], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 85], 00:10:25.080 | 30.00th=[ 88], 40.00th=[ 91], 50.00th=[ 135], 60.00th=[ 139], 00:10:25.080 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 145], 95.00th=[ 147], 00:10:25.080 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 208], 99.95th=[ 208], 00:10:25.080 | 99.99th=[ 225] 00:10:25.080 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:10:25.080 slat (nsec): min=10446, max=54955, avg=11260.88, stdev=1310.64 00:10:25.080 clat (usec): min=68, max=180, avg=109.30, stdev=25.80 00:10:25.080 lat (usec): min=80, max=191, avg=120.57, stdev=25.77 00:10:25.080 clat percentiles (usec): 00:10:25.080 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 82], 00:10:25.080 | 30.00th=[ 85], 40.00th=[ 89], 50.00th=[ 125], 60.00th=[ 131], 00:10:25.080 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 141], 00:10:25.080 | 99.00th=[ 145], 99.50th=[ 149], 99.90th=[ 176], 99.95th=[ 176], 00:10:25.080 | 99.99th=[ 182] 00:10:25.080 bw ( KiB/s): min=15008, max=15008, per=25.36%, avg=15008.00, stdev= 0.00, samples=1 00:10:25.080 iops : min= 3752, max= 3752, avg=3752.00, stdev= 0.00, samples=1 00:10:25.080 lat (usec) : 100=47.65%, 250=52.35% 00:10:25.080 cpu : usr=6.00%, sys=11.10%, ctx=8027, majf=0, minf=1 00:10:25.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.081 issued rwts: total=3931,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.081 00:10:25.081 Run status group 0 (all jobs): 00:10:25.081 READ: bw=52.0MiB/s (54.6MB/s), 12.0MiB/s-15.3MiB/s (12.6MB/s-16.1MB/s), io=52.1MiB (54.6MB), run=1001-1001msec 00:10:25.081 WRITE: bw=57.8MiB/s (60.6MB/s), 13.8MiB/s-16.0MiB/s (14.5MB/s-16.8MB/s), io=57.9MiB (60.7MB), run=1001-1001msec 00:10:25.081 00:10:25.081 Disk stats (read/write): 00:10:25.081 nvme0n1: ios=2644/3072, merge=0/0, ticks=358/365, in_queue=723, util=84.27% 00:10:25.081 nvme0n2: ios=2560/3019, merge=0/0, ticks=335/364, in_queue=699, util=85.12% 00:10:25.081 nvme0n3: ios=2599/3072, merge=0/0, ticks=349/365, in_queue=714, util=88.36% 00:10:25.081 nvme0n4: ios=3072/3249, merge=0/0, ticks=349/346, in_queue=695, util=89.51% 00:10:25.081 04:27:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:25.081 [global] 00:10:25.081 thread=1 00:10:25.081 invalidate=1 00:10:25.081 rw=write 00:10:25.081 time_based=1 00:10:25.081 runtime=1 00:10:25.081 ioengine=libaio 00:10:25.081 direct=1 00:10:25.081 bs=4096 00:10:25.081 iodepth=128 00:10:25.081 norandommap=0 00:10:25.081 numjobs=1 00:10:25.081 00:10:25.081 verify_dump=1 00:10:25.081 verify_backlog=512 00:10:25.081 verify_state_save=0 00:10:25.081 do_verify=1 00:10:25.081 verify=crc32c-intel 00:10:25.081 [job0] 00:10:25.081 filename=/dev/nvme0n1 00:10:25.081 [job1] 00:10:25.081 filename=/dev/nvme0n2 00:10:25.081 [job2] 00:10:25.081 filename=/dev/nvme0n3 00:10:25.081 [job3] 00:10:25.081 
filename=/dev/nvme0n4 00:10:25.081 Could not set queue depth (nvme0n1) 00:10:25.081 Could not set queue depth (nvme0n2) 00:10:25.081 Could not set queue depth (nvme0n3) 00:10:25.081 Could not set queue depth (nvme0n4) 00:10:25.081 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.081 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.081 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.081 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.081 fio-3.35 00:10:25.081 Starting 4 threads 00:10:26.464 00:10:26.464 job0: (groupid=0, jobs=1): err= 0: pid=191963: Sun Nov 17 04:27:05 2024 00:10:26.464 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:10:26.464 slat (usec): min=2, max=5544, avg=79.18, stdev=298.50 00:10:26.464 clat (usec): min=4807, max=27487, avg=10311.67, stdev=5892.39 00:10:26.464 lat (usec): min=4814, max=28032, avg=10390.84, stdev=5933.06 00:10:26.464 clat percentiles (usec): 00:10:26.464 | 1.00th=[ 6194], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6652], 00:10:26.464 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7439], 00:10:26.464 | 70.00th=[ 7635], 80.00th=[15533], 90.00th=[21890], 95.00th=[22938], 00:10:26.464 | 99.00th=[24511], 99.50th=[26346], 99.90th=[27395], 99.95th=[27395], 00:10:26.464 | 99.99th=[27395] 00:10:26.464 write: IOPS=6414, BW=25.1MiB/s (26.3MB/s)(25.2MiB/1005msec); 0 zone resets 00:10:26.464 slat (usec): min=2, max=4357, avg=76.68, stdev=277.56 00:10:26.464 clat (usec): min=4092, max=26316, avg=9878.09, stdev=5011.27 00:10:26.464 lat (usec): min=4833, max=26320, avg=9954.76, stdev=5045.36 00:10:26.464 clat percentiles (usec): 00:10:26.464 | 1.00th=[ 5932], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6390], 00:10:26.464 | 30.00th=[ 6587], 40.00th=[ 6849], 50.00th=[ 7111], 60.00th=[ 7373], 00:10:26.464 | 70.00th=[11731], 80.00th=[12387], 90.00th=[19006], 95.00th=[21890], 00:10:26.464 | 99.00th=[24511], 99.50th=[24773], 99.90th=[26346], 99.95th=[26346], 00:10:26.464 | 99.99th=[26346] 00:10:26.464 bw ( KiB/s): min=21888, max=28672, per=26.04%, avg=25280.00, stdev=4797.01, samples=2 00:10:26.464 iops : min= 5472, max= 7168, avg=6320.00, stdev=1199.25, samples=2 00:10:26.464 lat (msec) : 10=68.23%, 20=19.35%, 50=12.42% 00:10:26.464 cpu : usr=2.59%, sys=4.38%, ctx=1221, majf=0, minf=1 00:10:26.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:26.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.464 issued rwts: total=6144,6447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.464 job1: (groupid=0, jobs=1): err= 0: pid=191978: Sun Nov 17 04:27:05 2024 00:10:26.464 read: IOPS=7519, BW=29.4MiB/s (30.8MB/s)(29.5MiB/1005msec) 00:10:26.464 slat (usec): min=2, max=6329, avg=27.91, stdev=232.28 00:10:26.464 clat (usec): min=2886, max=22679, avg=8562.81, stdev=3203.36 00:10:26.464 lat (usec): min=2893, max=22687, avg=8590.72, stdev=3215.66 00:10:26.464 clat percentiles (usec): 00:10:26.464 | 1.00th=[ 3654], 5.00th=[ 4817], 10.00th=[ 5276], 20.00th=[ 5932], 00:10:26.464 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7963], 60.00th=[ 8717], 00:10:26.464 | 70.00th=[ 9765], 
80.00th=[10683], 90.00th=[12780], 95.00th=[15926], 00:10:26.464 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:26.464 | 99.99th=[22676] 00:10:26.464 write: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec); 0 zone resets 00:10:26.464 slat (usec): min=2, max=7763, avg=31.91, stdev=269.83 00:10:26.464 clat (usec): min=1824, max=22344, avg=8241.84, stdev=3108.72 00:10:26.464 lat (usec): min=1852, max=22352, avg=8273.75, stdev=3122.72 00:10:26.464 clat percentiles (usec): 00:10:26.464 | 1.00th=[ 3261], 5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 5735], 00:10:26.464 | 30.00th=[ 6456], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8455], 00:10:26.464 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[12125], 95.00th=[14091], 00:10:26.464 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:10:26.464 | 99.99th=[22414] 00:10:26.464 bw ( KiB/s): min=28672, max=32768, per=31.65%, avg=30720.00, stdev=2896.31, samples=2 00:10:26.464 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:10:26.464 lat (msec) : 2=0.10%, 4=2.57%, 10=72.39%, 20=24.93%, 50=0.02% 00:10:26.464 cpu : usr=4.98%, sys=9.46%, ctx=1061, majf=0, minf=1 00:10:26.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:26.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.464 issued rwts: total=7557,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.464 job2: (groupid=0, jobs=1): err= 0: pid=191997: Sun Nov 17 04:27:05 2024 00:10:26.464 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:10:26.464 slat (usec): min=2, max=3932, avg=83.15, stdev=294.36 00:10:26.464 clat (usec): min=6359, max=26957, avg=10793.33, stdev=4868.05 00:10:26.464 lat (usec): min=7243, max=26960, avg=10876.48, stdev=4898.17 00:10:26.464 clat percentiles (usec): 00:10:26.464 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8225], 00:10:26.464 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8586], 00:10:26.464 | 70.00th=[ 8848], 80.00th=[12387], 90.00th=[21103], 95.00th=[22152], 00:10:26.464 | 99.00th=[23462], 99.50th=[24511], 99.90th=[26346], 99.95th=[26870], 00:10:26.464 | 99.99th=[26870] 00:10:26.464 write: IOPS=6160, BW=24.1MiB/s (25.2MB/s)(24.2MiB/1005msec); 0 zone resets 00:10:26.464 slat (usec): min=2, max=2695, avg=75.94, stdev=265.19 00:10:26.464 clat (usec): min=4091, max=25929, avg=9808.45, stdev=3856.23 00:10:26.464 lat (usec): min=4842, max=25941, avg=9884.38, stdev=3878.05 00:10:26.464 clat percentiles (usec): 00:10:26.464 | 1.00th=[ 6915], 5.00th=[ 7504], 10.00th=[ 7701], 20.00th=[ 7898], 00:10:26.465 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8225], 60.00th=[ 8291], 00:10:26.465 | 70.00th=[ 8455], 80.00th=[11863], 90.00th=[12387], 95.00th=[21365], 00:10:26.465 | 99.00th=[23725], 99.50th=[24249], 99.90th=[25297], 99.95th=[25297], 00:10:26.465 | 99.99th=[25822] 00:10:26.465 bw ( KiB/s): min=19920, max=29232, per=25.32%, avg=24576.00, stdev=6584.58, samples=2 00:10:26.465 iops : min= 4980, max= 7308, avg=6144.00, stdev=1646.14, samples=2 00:10:26.465 lat (msec) : 10=75.55%, 20=14.43%, 50=10.02% 00:10:26.465 cpu : usr=2.49%, sys=4.38%, ctx=1234, majf=0, minf=1 00:10:26.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:26.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.465 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.465 issued rwts: total=6144,6191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.465 job3: (groupid=0, jobs=1): err= 0: pid=192004: Sun Nov 17 04:27:05 2024 00:10:26.465 read: IOPS=3974, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1006msec) 00:10:26.465 slat (usec): min=2, max=8172, avg=126.92, stdev=626.12 00:10:26.465 clat (usec): min=4928, max=30592, avg=16307.59, stdev=4399.45 00:10:26.465 lat (usec): min=7585, max=30603, avg=16434.51, stdev=4454.75 00:10:26.465 clat percentiles (usec): 00:10:26.465 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[12387], 00:10:26.465 | 30.00th=[13042], 40.00th=[13829], 50.00th=[15533], 60.00th=[18220], 00:10:26.465 | 70.00th=[19268], 80.00th=[21365], 90.00th=[22152], 95.00th=[22414], 00:10:26.465 | 99.00th=[25297], 99.50th=[27395], 99.90th=[30016], 99.95th=[30278], 00:10:26.465 | 99.99th=[30540] 00:10:26.465 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:10:26.465 slat (usec): min=2, max=8276, avg=116.61, stdev=557.21 00:10:26.465 clat (usec): min=7157, max=28826, avg=15187.92, stdev=3427.65 00:10:26.465 lat (usec): min=7160, max=28836, avg=15304.53, stdev=3477.42 00:10:26.465 clat percentiles (usec): 00:10:26.465 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[11469], 20.00th=[12387], 00:10:26.465 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14222], 60.00th=[16057], 00:10:26.465 | 70.00th=[17433], 80.00th=[18220], 90.00th=[20317], 95.00th=[21365], 00:10:26.465 | 99.00th=[22152], 99.50th=[24773], 99.90th=[25822], 99.95th=[26084], 00:10:26.465 | 99.99th=[28705] 00:10:26.465 bw ( KiB/s): min=16384, max=16384, per=16.88%, avg=16384.00, stdev= 0.00, samples=2 00:10:26.465 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:26.465 lat (msec) : 10=5.04%, 20=75.98%, 50=18.98% 00:10:26.465 cpu : usr=1.99%, sys=3.28%, ctx=648, majf=0, minf=1 00:10:26.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:26.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.465 issued rwts: total=3998,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.465 00:10:26.465 Run status group 0 (all jobs): 00:10:26.465 READ: bw=92.6MiB/s (97.1MB/s), 15.5MiB/s-29.4MiB/s (16.3MB/s-30.8MB/s), io=93.1MiB (97.7MB), run=1005-1006msec 00:10:26.465 WRITE: bw=94.8MiB/s (99.4MB/s), 15.9MiB/s-29.9MiB/s (16.7MB/s-31.3MB/s), io=95.4MiB (100.0MB), run=1005-1006msec 00:10:26.465 00:10:26.465 Disk stats (read/write): 00:10:26.465 nvme0n1: ios=5681/5958, merge=0/0, ticks=12976/13172, in_queue=26148, util=84.37% 00:10:26.465 nvme0n2: ios=5632/6076, merge=0/0, ticks=45230/46534, in_queue=91764, util=85.22% 00:10:26.465 nvme0n3: ios=5632/5703, merge=0/0, ticks=13542/12547, in_queue=26089, util=88.37% 00:10:26.465 nvme0n4: ios=3072/3263, merge=0/0, ticks=26229/25502, in_queue=51731, util=89.41% 00:10:26.465 04:27:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:26.465 [global] 00:10:26.465 thread=1 00:10:26.465 invalidate=1 00:10:26.465 rw=randwrite 00:10:26.465 time_based=1 00:10:26.465 runtime=1 00:10:26.465 ioengine=libaio 00:10:26.465 direct=1 00:10:26.465 
bs=4096 00:10:26.465 iodepth=128 00:10:26.465 norandommap=0 00:10:26.465 numjobs=1 00:10:26.465 00:10:26.465 verify_dump=1 00:10:26.465 verify_backlog=512 00:10:26.465 verify_state_save=0 00:10:26.465 do_verify=1 00:10:26.465 verify=crc32c-intel 00:10:26.465 [job0] 00:10:26.465 filename=/dev/nvme0n1 00:10:26.465 [job1] 00:10:26.465 filename=/dev/nvme0n2 00:10:26.465 [job2] 00:10:26.465 filename=/dev/nvme0n3 00:10:26.465 [job3] 00:10:26.465 filename=/dev/nvme0n4 00:10:26.465 Could not set queue depth (nvme0n1) 00:10:26.465 Could not set queue depth (nvme0n2) 00:10:26.465 Could not set queue depth (nvme0n3) 00:10:26.465 Could not set queue depth (nvme0n4) 00:10:26.724 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.724 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.724 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.724 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.724 fio-3.35 00:10:26.724 Starting 4 threads 00:10:28.109 00:10:28.109 job0: (groupid=0, jobs=1): err= 0: pid=192720: Sun Nov 17 04:27:06 2024 00:10:28.109 read: IOPS=3220, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1003msec) 00:10:28.109 slat (nsec): min=2000, max=4160.8k, avg=145902.10, stdev=453716.33 00:10:28.109 clat (usec): min=2543, max=24668, avg=18488.54, stdev=2772.29 00:10:28.109 lat (usec): min=3211, max=27006, avg=18634.44, stdev=2774.53 00:10:28.109 clat percentiles (usec): 00:10:28.109 | 1.00th=[ 6849], 5.00th=[16319], 10.00th=[16909], 20.00th=[17433], 00:10:28.109 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:10:28.109 | 70.00th=[18482], 80.00th=[19006], 90.00th=[23462], 95.00th=[23725], 00:10:28.109 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:10:28.109 | 99.99th=[24773] 00:10:28.109 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:10:28.109 slat (usec): min=2, max=4247, avg=143.93, stdev=443.39 00:10:28.109 clat (usec): min=14639, max=27531, avg=18640.78, stdev=2422.71 00:10:28.109 lat (usec): min=15078, max=27552, avg=18784.71, stdev=2433.17 00:10:28.109 clat percentiles (usec): 00:10:28.109 | 1.00th=[15795], 5.00th=[16450], 10.00th=[16712], 20.00th=[16909], 00:10:28.109 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:10:28.109 | 70.00th=[18220], 80.00th=[21103], 90.00th=[23200], 95.00th=[23725], 00:10:28.109 | 99.00th=[24511], 99.50th=[24773], 99.90th=[26608], 99.95th=[26608], 00:10:28.109 | 99.99th=[27657] 00:10:28.109 bw ( KiB/s): min=12384, max=16288, per=14.43%, avg=14336.00, stdev=2760.54, samples=2 00:10:28.109 iops : min= 3096, max= 4072, avg=3584.00, stdev=690.14, samples=2 00:10:28.109 lat (msec) : 4=0.12%, 10=0.57%, 20=80.47%, 50=18.84% 00:10:28.109 cpu : usr=1.20%, sys=2.69%, ctx=827, majf=0, minf=1 00:10:28.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:28.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.109 issued rwts: total=3230,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.109 job1: (groupid=0, jobs=1): err= 0: pid=192743: Sun Nov 17 04:27:06 2024 00:10:28.109 read: IOPS=9708, 
BW=37.9MiB/s (39.8MB/s)(38.0MiB/1002msec) 00:10:28.109 slat (usec): min=2, max=966, avg=51.42, stdev=188.31 00:10:28.109 clat (usec): min=3057, max=8167, avg=6690.13, stdev=670.82 00:10:28.109 lat (usec): min=3060, max=8291, avg=6741.55, stdev=654.82 00:10:28.109 clat percentiles (usec): 00:10:28.109 | 1.00th=[ 5080], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5800], 00:10:28.109 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7046], 00:10:28.109 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7308], 95.00th=[ 7439], 00:10:28.109 | 99.00th=[ 7570], 99.50th=[ 7832], 99.90th=[ 8029], 99.95th=[ 8094], 00:10:28.109 | 99.99th=[ 8160] 00:10:28.109 write: IOPS=9752, BW=38.1MiB/s (39.9MB/s)(38.2MiB/1002msec); 0 zone resets 00:10:28.109 slat (usec): min=2, max=1008, avg=48.32, stdev=174.47 00:10:28.109 clat (usec): min=553, max=7820, avg=6322.93, stdev=683.14 00:10:28.109 lat (usec): min=1250, max=7823, avg=6371.25, stdev=668.99 00:10:28.109 clat percentiles (usec): 00:10:28.109 | 1.00th=[ 4817], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5538], 00:10:28.109 | 30.00th=[ 5997], 40.00th=[ 6521], 50.00th=[ 6587], 60.00th=[ 6652], 00:10:28.109 | 70.00th=[ 6718], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7046], 00:10:28.109 | 99.00th=[ 7373], 99.50th=[ 7439], 99.90th=[ 7701], 99.95th=[ 7832], 00:10:28.109 | 99.99th=[ 7832] 00:10:28.109 bw ( KiB/s): min=36864, max=40960, per=39.16%, avg=38912.00, stdev=2896.31, samples=2 00:10:28.109 iops : min= 9216, max=10240, avg=9728.00, stdev=724.08, samples=2 00:10:28.109 lat (usec) : 750=0.01% 00:10:28.109 lat (msec) : 2=0.09%, 4=0.24%, 10=99.67% 00:10:28.109 cpu : usr=3.00%, sys=6.79%, ctx=1247, majf=0, minf=1 00:10:28.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:28.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.109 issued rwts: total=9728,9772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.109 job2: (groupid=0, jobs=1): err= 0: pid=192763: Sun Nov 17 04:27:06 2024 00:10:28.109 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:10:28.109 slat (usec): min=2, max=1233, avg=72.33, stdev=236.46 00:10:28.109 clat (usec): min=4712, max=18738, avg=9274.73, stdev=2726.40 00:10:28.109 lat (usec): min=4719, max=18863, avg=9347.06, stdev=2751.12 00:10:28.109 clat percentiles (usec): 00:10:28.109 | 1.00th=[ 7177], 5.00th=[ 7570], 10.00th=[ 7701], 20.00th=[ 7832], 00:10:28.109 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:10:28.109 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[11600], 95.00th=[17695], 00:10:28.109 | 99.00th=[17957], 99.50th=[17957], 99.90th=[17957], 99.95th=[18220], 00:10:28.109 | 99.99th=[18744] 00:10:28.109 write: IOPS=6937, BW=27.1MiB/s (28.4MB/s)(27.2MiB/1002msec); 0 zone resets 00:10:28.109 slat (usec): min=2, max=1968, avg=71.55, stdev=232.24 00:10:28.109 clat (usec): min=965, max=18554, avg=9333.44, stdev=3296.79 00:10:28.109 lat (usec): min=1680, max=18745, avg=9404.99, stdev=3323.26 00:10:28.109 clat percentiles (usec): 00:10:28.109 | 1.00th=[ 5407], 5.00th=[ 7242], 10.00th=[ 7373], 20.00th=[ 7570], 00:10:28.109 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8291], 00:10:28.109 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[16909], 95.00th=[17433], 00:10:28.109 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18220], 99.95th=[18482], 00:10:28.109 | 
99.99th=[18482] 00:10:28.109 bw ( KiB/s): min=24576, max=30008, per=27.47%, avg=27292.00, stdev=3841.00, samples=2 00:10:28.109 iops : min= 6144, max= 7502, avg=6823.00, stdev=960.25, samples=2 00:10:28.109 lat (usec) : 1000=0.01% 00:10:28.109 lat (msec) : 2=0.15%, 4=0.16%, 10=86.83%, 20=12.85% 00:10:28.109 cpu : usr=2.50%, sys=5.00%, ctx=1176, majf=0, minf=1 00:10:28.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:28.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.109 issued rwts: total=6656,6951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.109 job3: (groupid=0, jobs=1): err= 0: pid=192770: Sun Nov 17 04:27:06 2024 00:10:28.109 read: IOPS=4343, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1003msec) 00:10:28.109 slat (usec): min=2, max=2987, avg=111.66, stdev=335.40 00:10:28.109 clat (usec): min=2492, max=19378, avg=14188.35, stdev=1977.93 00:10:28.109 lat (usec): min=3214, max=19388, avg=14300.02, stdev=1997.36 00:10:28.109 clat percentiles (usec): 00:10:28.109 | 1.00th=[ 6849], 5.00th=[12518], 10.00th=[12911], 20.00th=[13173], 00:10:28.109 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13960], 00:10:28.109 | 70.00th=[14222], 80.00th=[15008], 90.00th=[17695], 95.00th=[17957], 00:10:28.109 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:28.109 | 99.99th=[19268] 00:10:28.109 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:28.109 slat (usec): min=2, max=2681, avg=108.12, stdev=315.53 00:10:28.109 clat (usec): min=11270, max=18542, avg=14091.80, stdev=1713.18 00:10:28.109 lat (usec): min=11279, max=18744, avg=14199.91, stdev=1727.12 00:10:28.109 clat percentiles (usec): 00:10:28.109 | 1.00th=[11863], 5.00th=[12256], 10.00th=[12518], 20.00th=[12911], 00:10:28.109 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:10:28.109 | 70.00th=[14091], 80.00th=[16450], 90.00th=[17433], 95.00th=[17433], 00:10:28.109 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:10:28.109 | 99.99th=[18482] 00:10:28.110 bw ( KiB/s): min=16384, max=20480, per=18.55%, avg=18432.00, stdev=2896.31, samples=2 00:10:28.110 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:28.110 lat (msec) : 4=0.15%, 10=0.78%, 20=99.07% 00:10:28.110 cpu : usr=1.40%, sys=3.99%, ctx=1043, majf=0, minf=1 00:10:28.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:28.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.110 issued rwts: total=4357,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.110 00:10:28.110 Run status group 0 (all jobs): 00:10:28.110 READ: bw=93.4MiB/s (97.9MB/s), 12.6MiB/s-37.9MiB/s (13.2MB/s-39.8MB/s), io=93.6MiB (98.2MB), run=1002-1003msec 00:10:28.110 WRITE: bw=97.0MiB/s (102MB/s), 14.0MiB/s-38.1MiB/s (14.6MB/s-39.9MB/s), io=97.3MiB (102MB), run=1002-1003msec 00:10:28.110 00:10:28.110 Disk stats (read/write): 00:10:28.110 nvme0n1: ios=2609/2974, merge=0/0, ticks=12150/14054, in_queue=26204, util=83.57% 00:10:28.110 nvme0n2: ios=8078/8192, merge=0/0, ticks=13272/12604, in_queue=25876, util=85.01% 00:10:28.110 nvme0n3: ios=5275/5632, merge=0/0, 
ticks=13676/14848, in_queue=28524, util=88.18% 00:10:28.110 nvme0n4: ios=3584/3691, merge=0/0, ticks=17022/17249, in_queue=34271, util=89.32% 00:10:28.110 04:27:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:28.110 04:27:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=192931 00:10:28.110 04:27:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:28.110 04:27:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:28.110 [global] 00:10:28.110 thread=1 00:10:28.110 invalidate=1 00:10:28.110 rw=read 00:10:28.110 time_based=1 00:10:28.110 runtime=10 00:10:28.110 ioengine=libaio 00:10:28.110 direct=1 00:10:28.110 bs=4096 00:10:28.110 iodepth=1 00:10:28.110 norandommap=1 00:10:28.110 numjobs=1 00:10:28.110 00:10:28.110 [job0] 00:10:28.110 filename=/dev/nvme0n1 00:10:28.110 [job1] 00:10:28.110 filename=/dev/nvme0n2 00:10:28.110 [job2] 00:10:28.110 filename=/dev/nvme0n3 00:10:28.110 [job3] 00:10:28.110 filename=/dev/nvme0n4 00:10:28.110 Could not set queue depth (nvme0n1) 00:10:28.110 Could not set queue depth (nvme0n2) 00:10:28.110 Could not set queue depth (nvme0n3) 00:10:28.110 Could not set queue depth (nvme0n4) 00:10:28.370 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.370 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.370 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.370 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.370 fio-3.35 00:10:28.370 Starting 4 threads 00:10:31.671 04:27:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:31.671 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=72359936, buflen=4096 00:10:31.671 fio: pid=193306, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:31.671 04:27:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:31.671 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=89341952, buflen=4096 00:10:31.671 fio: pid=193300, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:31.671 04:27:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.671 04:27:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:31.671 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=55910400, buflen=4096 00:10:31.671 fio: pid=193262, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:31.671 04:27:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.671 04:27:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:31.932 fio: io_u 
error on file /dev/nvme0n2: Operation not supported: read offset=5259264, buflen=4096 00:10:31.932 fio: pid=193280, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:31.932 04:27:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.932 04:27:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:31.932 00:10:31.932 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=193262: Sun Nov 17 04:27:10 2024 00:10:31.932 read: IOPS=9841, BW=38.4MiB/s (40.3MB/s)(117MiB/3052msec) 00:10:31.932 slat (usec): min=7, max=18717, avg=11.00, stdev=196.33 00:10:31.932 clat (usec): min=48, max=212, avg=88.77, stdev=23.69 00:10:31.932 lat (usec): min=56, max=18815, avg=99.77, stdev=197.76 00:10:31.932 clat percentiles (usec): 00:10:31.932 | 1.00th=[ 57], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 75], 00:10:31.932 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:10:31.932 | 70.00th=[ 85], 80.00th=[ 100], 90.00th=[ 133], 95.00th=[ 139], 00:10:31.932 | 99.00th=[ 153], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 192], 00:10:31.932 | 99.99th=[ 200] 00:10:31.932 bw ( KiB/s): min=28008, max=45536, per=31.15%, avg=39267.20, stdev=8007.84, samples=5 00:10:31.932 iops : min= 7002, max=11384, avg=9816.80, stdev=2001.96, samples=5 00:10:31.932 lat (usec) : 50=0.08%, 100=79.91%, 250=20.01% 00:10:31.932 cpu : usr=5.08%, sys=13.41%, ctx=30042, majf=0, minf=1 00:10:31.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.932 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.932 issued rwts: total=30035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.932 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=193280: Sun Nov 17 04:27:10 2024 00:10:31.932 read: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(133MiB/3286msec) 00:10:31.932 slat (usec): min=2, max=18799, avg=11.16, stdev=192.18 00:10:31.932 clat (usec): min=43, max=21595, avg=83.13, stdev=127.30 00:10:31.932 lat (usec): min=47, max=21603, avg=94.29, stdev=230.56 00:10:31.932 clat percentiles (usec): 00:10:31.932 | 1.00th=[ 54], 5.00th=[ 60], 10.00th=[ 70], 20.00th=[ 75], 00:10:31.932 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:10:31.932 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 90], 95.00th=[ 139], 00:10:31.932 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 206], 99.95th=[ 212], 00:10:31.932 | 99.99th=[ 1029] 00:10:31.932 bw ( KiB/s): min=31968, max=44584, per=32.67%, avg=41181.67, stdev=5380.20, samples=6 00:10:31.932 iops : min= 7992, max=11146, avg=10295.33, stdev=1345.12, samples=6 00:10:31.932 lat (usec) : 50=0.38%, 100=92.83%, 250=6.76%, 500=0.01% 00:10:31.932 lat (msec) : 2=0.01%, 10=0.01%, 50=0.01% 00:10:31.932 cpu : usr=5.30%, sys=13.88%, ctx=34060, majf=0, minf=2 00:10:31.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.932 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.932 issued rwts: total=34053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:31.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.932 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=193300: Sun Nov 17 04:27:10 2024 00:10:31.932 read: IOPS=7637, BW=29.8MiB/s (31.3MB/s)(85.2MiB/2856msec) 00:10:31.932 slat (usec): min=2, max=15800, avg=10.31, stdev=146.96 00:10:31.932 clat (usec): min=57, max=227, avg=118.25, stdev=28.96 00:10:31.932 lat (usec): min=61, max=15894, avg=128.56, stdev=149.59 00:10:31.932 clat percentiles (usec): 00:10:31.932 | 1.00th=[ 77], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 89], 00:10:31.932 | 30.00th=[ 93], 40.00th=[ 98], 50.00th=[ 121], 60.00th=[ 135], 00:10:31.932 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 165], 00:10:31.932 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 202], 99.95th=[ 210], 00:10:31.932 | 99.99th=[ 217] 00:10:31.932 bw ( KiB/s): min=24992, max=39768, per=24.51%, avg=30896.00, stdev=6451.88, samples=5 00:10:31.932 iops : min= 6248, max= 9942, avg=7724.00, stdev=1612.97, samples=5 00:10:31.932 lat (usec) : 100=41.59%, 250=58.40% 00:10:31.932 cpu : usr=4.03%, sys=10.33%, ctx=21816, majf=0, minf=2 00:10:31.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.932 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.932 issued rwts: total=21813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.932 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=193306: Sun Nov 17 04:27:10 2024 00:10:31.932 read: IOPS=6674, BW=26.1MiB/s (27.3MB/s)(69.0MiB/2647msec) 00:10:31.932 slat (nsec): min=8387, max=51469, avg=9876.82, stdev=2310.10 00:10:31.932 clat (usec): min=73, max=248, avg=137.25, stdev=17.20 00:10:31.932 lat (usec): min=85, max=257, avg=147.12, stdev=17.12 00:10:31.932 clat percentiles (usec): 00:10:31.932 | 1.00th=[ 94], 5.00th=[ 111], 10.00th=[ 120], 20.00th=[ 127], 00:10:31.932 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:10:31.932 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 159], 95.00th=[ 169], 00:10:31.932 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 215], 99.95th=[ 221], 00:10:31.932 | 99.99th=[ 231] 00:10:31.932 bw ( KiB/s): min=24952, max=28008, per=21.38%, avg=26958.40, stdev=1207.52, samples=5 00:10:31.932 iops : min= 6238, max= 7002, avg=6739.60, stdev=301.88, samples=5 00:10:31.932 lat (usec) : 100=2.68%, 250=97.31% 00:10:31.932 cpu : usr=3.33%, sys=9.52%, ctx=17667, majf=0, minf=2 00:10:31.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.932 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.932 issued rwts: total=17667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.932 00:10:31.932 Run status group 0 (all jobs): 00:10:31.932 READ: bw=123MiB/s (129MB/s), 26.1MiB/s-40.5MiB/s (27.3MB/s-42.4MB/s), io=405MiB (424MB), run=2647-3286msec 00:10:31.932 00:10:31.932 Disk stats (read/write): 00:10:31.932 nvme0n1: ios=27866/0, merge=0/0, ticks=2254/0, in_queue=2254, util=93.62% 00:10:31.932 nvme0n2: ios=31639/0, merge=0/0, ticks=2424/0, in_queue=2424, util=93.21% 00:10:31.932 nvme0n3: ios=21812/0, merge=0/0, ticks=2371/0, 
in_queue=2371, util=95.41% 00:10:31.932 nvme0n4: ios=17394/0, merge=0/0, ticks=2189/0, in_queue=2189, util=96.46% 00:10:32.194 04:27:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:32.194 04:27:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:32.454 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:32.455 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:32.455 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:32.455 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:32.715 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:32.715 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:32.975 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:32.975 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 192931 00:10:32.975 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:32.975 04:27:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:33.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:33.913 nvmf hotplug test: fio failed as expected 00:10:33.913 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:34.173 04:27:12 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:34.173 rmmod nvme_rdma 00:10:34.173 rmmod nvme_fabrics 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 189513 ']' 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 189513 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 189513 ']' 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 189513 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 189513 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 189513' 00:10:34.173 killing process with pid 189513 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 189513 00:10:34.173 04:27:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 189513 00:10:34.433 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:34.433 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:34.433 00:10:34.433 real 0m27.239s 00:10:34.433 user 2m6.176s 00:10:34.433 sys 0m10.697s 00:10:34.433 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.433 04:27:13 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.433 ************************************ 00:10:34.433 END TEST nvmf_fio_target 00:10:34.433 ************************************ 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.748 ************************************ 00:10:34.748 START TEST nvmf_bdevio 00:10:34.748 ************************************ 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:34.748 * Looking for test storage... 00:10:34.748 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.748 --rc genhtml_branch_coverage=1 00:10:34.748 --rc genhtml_function_coverage=1 00:10:34.748 --rc genhtml_legend=1 00:10:34.748 --rc geninfo_all_blocks=1 00:10:34.748 --rc geninfo_unexecuted_blocks=1 00:10:34.748 00:10:34.748 ' 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.748 --rc genhtml_branch_coverage=1 00:10:34.748 --rc genhtml_function_coverage=1 00:10:34.748 --rc genhtml_legend=1 00:10:34.748 --rc geninfo_all_blocks=1 00:10:34.748 --rc geninfo_unexecuted_blocks=1 00:10:34.748 00:10:34.748 ' 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.748 --rc genhtml_branch_coverage=1 00:10:34.748 --rc genhtml_function_coverage=1 00:10:34.748 --rc genhtml_legend=1 00:10:34.748 --rc geninfo_all_blocks=1 00:10:34.748 --rc geninfo_unexecuted_blocks=1 00:10:34.748 00:10:34.748 ' 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.748 --rc genhtml_branch_coverage=1 00:10:34.748 --rc genhtml_function_coverage=1 00:10:34.748 --rc genhtml_legend=1 00:10:34.748 --rc geninfo_all_blocks=1 00:10:34.748 --rc geninfo_unexecuted_blocks=1 00:10:34.748 00:10:34.748 ' 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:34.748 04:27:13 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.748 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.749 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.749 04:27:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.882 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.882 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:42.882 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:42.883 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:42.883 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:42.883 04:27:20 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:42.883 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:42.883 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
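# A minimal standalone sketch of the per-interface IP lookup traced above
# (the get_ip_address step in nvmf/common.sh), assuming an RDMA-capable
# netdev such as mlx_0_0 is present: field $4 of `ip -o -4 addr show` is the
# addr/prefix pair, and cut strips the prefix length.
iface=mlx_0_0   # assumed interface name
ip_addr=$(ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1)
[[ -n "$ip_addr" ]] && echo "$iface has IPv4 $ip_addr"   # e.g. 192.168.100.8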
00:10:42.883 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:42.884 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:42.884 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:42.884 altname enp217s0f0np0 00:10:42.884 altname ens818f0np0 00:10:42.884 inet 192.168.100.8/24 scope global mlx_0_0 00:10:42.884 valid_lft forever preferred_lft forever 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:42.884 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:42.884 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:42.884 altname enp217s0f1np1 00:10:42.884 altname ens818f1np1 00:10:42.884 inet 192.168.100.9/24 scope global mlx_0_1 00:10:42.884 valid_lft forever preferred_lft forever 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
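# A minimal sketch of the nested matching loop traced above (get_rdma_if_list
# in nvmf/common.sh), assuming net_devs and rxe_net_devs were already
# populated by PCI discovery and `rxe_cfg rxe-net`: a netdev is emitted only
# if it also appears in the RDMA-capable list, and `continue 2` resumes the
# outer loop after the first match.
net_devs=(mlx_0_0 mlx_0_1)        # assumed, from PCI discovery
rxe_net_devs=(mlx_0_0 mlx_0_1)    # assumed, from rxe_cfg rxe-net
for net_dev in "${net_devs[@]}"; do
  for rxe_net_dev in "${rxe_net_devs[@]}"; do
    if [[ "$net_dev" == "$rxe_net_dev" ]]; then
      echo "$net_dev"
      continue 2
    fi
  done
done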
00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:42.884 192.168.100.9' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:42.884 192.168.100.9' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:42.884 192.168.100.9' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=197584 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 197584 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 197584 ']' 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.884 04:27:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.884 [2024-11-17 04:27:20.896759] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:42.884 [2024-11-17 04:27:20.896819] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.884 [2024-11-17 04:27:20.987237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.884 [2024-11-17 04:27:21.009276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.884 [2024-11-17 04:27:21.009315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.884 [2024-11-17 04:27:21.009324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.884 [2024-11-17 04:27:21.009333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.884 [2024-11-17 04:27:21.009356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
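nvmfappstart above boils down to launching build/bin/nvmf_tgt in the background and blocking until its RPC socket answers; the real waitforlisten retry loop (max_retries=100 in the trace) is more elaborate, so the following is only a simplified sketch of the same idea:

    # Simplified: start the target and wait for /var/tmp/spdk.sock to appear.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done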
00:10:42.884 [2024-11-17 04:27:21.011202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:10:42.884 [2024-11-17 04:27:21.011312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:10:42.884 [2024-11-17 04:27:21.011421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:42.884 [2024-11-17 04:27:21.011423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:42.884 [2024-11-17 04:27:21.175965] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21da550/0x21dea00) succeed.
00:10:42.884 [2024-11-17 04:27:21.185184] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21dbb90/0x22200a0) succeed.
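rpc_cmd hands each of these calls to scripts/rpc.py against /var/tmp/spdk.sock. Together with the subsystem, namespace, and listener steps traced just below, the whole target bring-up corresponds to this direct rpc.py sequence (a sketch; the script itself drives the same RPCs through its rpc_cmd wrapper):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

bdevio is then pointed at the target with --json <(gen_nvmf_target_json), which is why the trace shows it reading /dev/fd/62: the generated bdev_nvme_attach_controller config arrives over a process-substitution file descriptor rather than a file on disk.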
00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.884 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 Malloc0 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 [2024-11-17 04:27:21.371404] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:42.885 { 00:10:42.885 "params": { 00:10:42.885 "name": "Nvme$subsystem", 00:10:42.885 "trtype": "$TEST_TRANSPORT", 00:10:42.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.885 "adrfam": "ipv4", 00:10:42.885 "trsvcid": "$NVMF_PORT", 00:10:42.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.885 "hdgst": ${hdgst:-false}, 00:10:42.885 "ddgst": ${ddgst:-false} 00:10:42.885 }, 00:10:42.885 "method": "bdev_nvme_attach_controller" 00:10:42.885 } 00:10:42.885 EOF 00:10:42.885 )") 00:10:42.885 04:27:21 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:42.885 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:42.885 "params": { 00:10:42.885 "name": "Nvme1", 00:10:42.885 "trtype": "rdma", 00:10:42.885 "traddr": "192.168.100.8", 00:10:42.885 "adrfam": "ipv4", 00:10:42.885 "trsvcid": "4420", 00:10:42.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.885 "hdgst": false, 00:10:42.885 "ddgst": false 00:10:42.885 }, 00:10:42.885 "method": "bdev_nvme_attach_controller" 00:10:42.885 }' 00:10:42.885 [2024-11-17 04:27:21.419946] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:42.885 [2024-11-17 04:27:21.420000] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197841 ] 00:10:42.885 [2024-11-17 04:27:21.512564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.885 [2024-11-17 04:27:21.537715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.885 [2024-11-17 04:27:21.537825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.885 [2024-11-17 04:27:21.537826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.885 I/O targets: 00:10:42.885 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:42.885 00:10:42.885 00:10:42.885 CUnit - A unit testing framework for C - Version 2.1-3 00:10:42.885 http://cunit.sourceforge.net/ 00:10:42.885 00:10:42.885 00:10:42.885 Suite: bdevio tests on: Nvme1n1 00:10:43.145 Test: blockdev write read block ...passed 00:10:43.146 Test: blockdev write zeroes read block ...passed 00:10:43.146 Test: blockdev write zeroes read no split ...passed 00:10:43.146 Test: blockdev write zeroes read split ...passed 00:10:43.146 Test: blockdev write zeroes read split partial ...passed 00:10:43.146 Test: blockdev reset ...[2024-11-17 04:27:21.736616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:43.146 [2024-11-17 04:27:21.759195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:10:43.146 [2024-11-17 04:27:21.786112] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:43.146 passed
00:10:43.146 Test: blockdev write read 8 blocks ...passed
00:10:43.146 Test: blockdev write read size > 128k ...passed
00:10:43.146 Test: blockdev write read invalid size ...passed
00:10:43.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:43.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:43.146 Test: blockdev write read max offset ...passed
00:10:43.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:43.146 Test: blockdev writev readv 8 blocks ...passed
00:10:43.146 Test: blockdev writev readv 30 x 1block ...passed
00:10:43.146 Test: blockdev writev readv block ...passed
00:10:43.146 Test: blockdev writev readv size > 128k ...passed
00:10:43.146 Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:43.146 Test: blockdev comparev and writev ...[2024-11-17 04:27:21.789507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:43.146 [2024-11-17 04:27:21.789535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.789548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:43.146 [2024-11-17 04:27:21.789560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.789718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:43.146 [2024-11-17 04:27:21.789730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.789740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:43.146 [2024-11-17 04:27:21.789750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.789920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:43.146 [2024-11-17 04:27:21.789935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.789946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:43.146 [2024-11-17 04:27:21.789955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.790113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:43.146 [2024-11-17 04:27:21.790123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.790134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:43.146 [2024-11-17 04:27:21.790143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:10:43.146 passed
00:10:43.146 Test: blockdev nvme passthru rw ...passed
00:10:43.146 Test: blockdev nvme passthru vendor specific ...[2024-11-17 04:27:21.790436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:10:43.146 [2024-11-17 04:27:21.790449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.790492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:10:43.146 [2024-11-17 04:27:21.790503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.790540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:10:43.146 [2024-11-17 04:27:21.790550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:10:43.146 [2024-11-17 04:27:21.790594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:10:43.146 [2024-11-17 04:27:21.790604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:10:43.146 passed
00:10:43.146 Test: blockdev nvme admin passthru ...passed
00:10:43.146 Test: blockdev copy ...passed
00:10:43.146
00:10:43.146 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:43.146               suites      1      1    n/a      0        0
00:10:43.146                tests     23     23     23      0        0
00:10:43.146              asserts    152    152    152      0      n/a
00:10:43.146
00:10:43.146 Elapsed time = 0.172 seconds
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:43.146 04:27:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:10:43.146 rmmod nvme_rdma
00:10:43.407 rmmod nvme_fabrics
00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:43.407 04:27:22
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 197584 ']' 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 197584 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 197584 ']' 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 197584 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197584 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197584' 00:10:43.407 killing process with pid 197584 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 197584 00:10:43.407 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 197584 00:10:43.668 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.668 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:43.668 00:10:43.668 real 0m9.027s 00:10:43.668 user 0m8.339s 00:10:43.668 sys 0m6.129s 00:10:43.668 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.668 04:27:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.668 ************************************ 00:10:43.668 END TEST nvmf_bdevio 00:10:43.668 ************************************ 00:10:43.668 04:27:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:43.668 00:10:43.668 real 4m14.062s 00:10:43.668 user 10m47.509s 00:10:43.668 sys 1m40.080s 00:10:43.668 04:27:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.668 04:27:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.668 ************************************ 00:10:43.668 END TEST nvmf_target_core 00:10:43.668 ************************************ 00:10:43.668 04:27:22 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:43.668 04:27:22 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.668 04:27:22 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.668 04:27:22 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:43.668 ************************************ 00:10:43.668 START TEST nvmf_target_extra 00:10:43.668 ************************************ 00:10:43.668 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:43.929 * Looking for test storage... 00:10:43.929 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.929 --rc genhtml_branch_coverage=1 00:10:43.929 --rc genhtml_function_coverage=1 00:10:43.929 --rc genhtml_legend=1 00:10:43.929 --rc geninfo_all_blocks=1 00:10:43.929 --rc geninfo_unexecuted_blocks=1 00:10:43.929 00:10:43.929 ' 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.929 --rc genhtml_branch_coverage=1 00:10:43.929 --rc genhtml_function_coverage=1 00:10:43.929 --rc genhtml_legend=1 00:10:43.929 --rc geninfo_all_blocks=1 00:10:43.929 --rc geninfo_unexecuted_blocks=1 00:10:43.929 00:10:43.929 ' 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.929 --rc genhtml_branch_coverage=1 00:10:43.929 --rc genhtml_function_coverage=1 00:10:43.929 --rc genhtml_legend=1 00:10:43.929 --rc geninfo_all_blocks=1 00:10:43.929 --rc geninfo_unexecuted_blocks=1 00:10:43.929 00:10:43.929 ' 00:10:43.929 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.929 --rc genhtml_branch_coverage=1 00:10:43.929 --rc genhtml_function_coverage=1 00:10:43.929 --rc genhtml_legend=1 00:10:43.929 --rc geninfo_all_blocks=1 00:10:43.929 --rc geninfo_unexecuted_blocks=1 00:10:43.929 00:10:43.929 ' 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.930 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.930 ************************************ 00:10:43.930 START TEST nvmf_example 00:10:43.930 ************************************ 00:10:43.930 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:44.191 * Looking for test storage... 
00:10:44.191 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.191 --rc genhtml_branch_coverage=1 00:10:44.191 --rc genhtml_function_coverage=1 00:10:44.191 --rc genhtml_legend=1 00:10:44.191 --rc geninfo_all_blocks=1 00:10:44.191 --rc geninfo_unexecuted_blocks=1 00:10:44.191 00:10:44.191 ' 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.191 --rc genhtml_branch_coverage=1 00:10:44.191 --rc genhtml_function_coverage=1 00:10:44.191 --rc genhtml_legend=1 00:10:44.191 --rc geninfo_all_blocks=1 00:10:44.191 --rc geninfo_unexecuted_blocks=1 00:10:44.191 00:10:44.191 ' 00:10:44.191 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.192 --rc genhtml_branch_coverage=1 00:10:44.192 --rc genhtml_function_coverage=1 00:10:44.192 --rc genhtml_legend=1 00:10:44.192 --rc geninfo_all_blocks=1 00:10:44.192 --rc geninfo_unexecuted_blocks=1 00:10:44.192 00:10:44.192 ' 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.192 --rc genhtml_branch_coverage=1 00:10:44.192 --rc genhtml_function_coverage=1 00:10:44.192 --rc genhtml_legend=1 00:10:44.192 --rc geninfo_all_blocks=1 00:10:44.192 --rc geninfo_unexecuted_blocks=1 00:10:44.192 00:10:44.192 ' 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
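The lt/cmp_versions trace above splits both version strings on '.', '-' or ':' (IFS=.-:) and compares the numeric fields left to right, which is how lcov 1.15 sorts before 2. A compacted sketch of that comparison (numeric fields only; the real cmp_versions in scripts/common.sh also dispatches on the operator through its case "$op" block):

    ver_lt() {
        local IFS=.-:                  # same field separators as the trace
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}  # missing fields compare as 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                       # equal versions are not less-than
    }

    # ver_lt 1.15 2 && echo older     -> prints "older", matching the trace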
00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.192 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
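The recurring "[: : integer expression expected" message above (from common.sh line 33) is bash complaining that '[' '' -eq 1 ']' compares an empty string with a numeric operator; the check still ends up on the false path only because the failed test exits non-zero. The usual hardening is to give the variable a numeric default before comparing; the variable name below is hypothetical, standing in for whatever common.sh tests at that line:

    # Hypothetical guard: default the flag to 0 so -eq always sees an integer.
    if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
        echo "optional test suite enabled"
    fi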
00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.192 04:27:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:52.329 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
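This discovery pass is driven by a vendor:device table (0x15b3 is the Mellanox vendor ID; 0x1015 is one of the mlx entries matched here) and each hit is then mapped to its netdev names through /sys/bus/pci/devices/$pci/net, as the pci_net_devs expansion just below shows. A reduced sketch of the same sysfs walk:

    # Reduced sketch: list Mellanox (vendor 0x15b3) PCI NICs and their netdevs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [ "$vendor" = 0x15b3 ] || continue
        [ -d "$pci/net" ] || continue        # skip functions with no netdev
        nets=("$pci"/net/*)
        echo "Found ${pci##*/} ($vendor - $device): ${nets[*]##*/}"
    done

    # e.g. Found 0000:d9:00.0 (0x15b3 - 0x1015): mlx_0_0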
00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:52.329 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:52.329 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:52.329 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:52.330 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.330 04:27:29 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:52.330 04:27:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:52.330 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.330 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:52.330 altname enp217s0f0np0 00:10:52.330 altname ens818f0np0 00:10:52.330 inet 192.168.100.8/24 scope global mlx_0_0 00:10:52.330 valid_lft forever preferred_lft forever 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:52.330 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.330 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:52.330 altname enp217s0f1np1 00:10:52.330 altname ens818f1np1 00:10:52.330 inet 192.168.100.9/24 scope global mlx_0_1 00:10:52.330 valid_lft forever preferred_lft forever 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- 
# get_available_rdma_ips 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.330 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.331 04:27:30 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:52.331 192.168.100.9' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:52.331 192.168.100.9' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:52.331 192.168.100.9' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=201363 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 201363 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 201363 ']' 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
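The modprobe run and the head/tail juggling traced above reduce to a short recipe. In plain form, with the real get_rdma_if_list loop collapsed onto the two interfaces found here (a sketch, not the literal common.sh source):

# rdma_device_init: load the kernel RDMA stack (module list verbatim from the trace)
modprobe ib_cm
modprobe ib_core
modprobe ib_umad
modprobe ib_uverbs
modprobe iw_cm
modprobe rdma_cm
modprobe rdma_ucm

# nvmf/common.sh@116-117: first IPv4 address of an interface
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# sh@484-487: gather per-port IPs, then pick the first/second target addresses
RDMA_IP_LIST=$(get_ip_address mlx_0_0; get_ip_address mlx_0_1)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 here
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 here
[ -n "$NVMF_FIRST_TARGET_IP" ]   # the '-z' check above bails out if nothing was found
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma

With the fabric ready, nvmfexamplestart launches build/examples/nvmf with -m 0xF and the test provisions it over the RPC socket /var/tmp/spdk.sock before driving it with spdk_nvme_perf, all traced just below (sh@42-61). The provisioning sequence, lifted from that trace; rpc_cmd wraps scripts/rpc.py, and SPDK_BIN_DIR standing for spdk/build/bin is an assumed shorthand:

rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192            # sh@45
rpc_cmd bdev_malloc_create 64 512                  # sh@47: 64 MB ramdisk, 512 B blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # sh@49
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # sh@53
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a "$NVMF_FIRST_TARGET_IP" -s 4420  # sh@57
"$SPDK_BIN_DIR/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:rdma adrfam:IPv4 traddr:$NVMF_FIRST_TARGET_IP trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"

When perf finishes (about 27.1k IOPS of mixed 4 KiB I/O at queue depth 64 on this rig, per the summary below), nvmftestfini kills the example app and retries modprobe -v -r nvme-rdma and nvme-fabrics for up to 20 iterations; the rmmod lines further down are that teardown.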
00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.331 04:27:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.331 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.331 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:52.331 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:52.331 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.331 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.331 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:52.331 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.331 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:52.591 04:27:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:04.816 Initializing NVMe Controllers 00:11:04.816 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:04.816 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:04.816 Initialization complete. Launching workers. 00:11:04.816 ======================================================== 00:11:04.816 Latency(us) 00:11:04.816 Device Information : IOPS MiB/s Average min max 00:11:04.816 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 27145.20 106.04 2357.22 625.98 12083.16 00:11:04.816 ======================================================== 00:11:04.816 Total : 27145.20 106.04 2357.22 625.98 12083.16 00:11:04.816 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:04.816 rmmod nvme_rdma 00:11:04.816 rmmod nvme_fabrics 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 201363 ']' 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 201363 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 201363 ']' 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 201363 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 201363 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:04.816 04:27:42 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 201363' 00:11:04.816 killing process with pid 201363 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 201363 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 201363 00:11:04.816 nvmf threads initialize successfully 00:11:04.816 bdev subsystem init successfully 00:11:04.816 created a nvmf target service 00:11:04.816 create targets's poll groups done 00:11:04.816 all subsystems of target started 00:11:04.816 nvmf target is running 00:11:04.816 all subsystems of target stopped 00:11:04.816 destroy targets's poll groups done 00:11:04.816 destroyed the nvmf target service 00:11:04.816 bdev subsystem finish successfully 00:11:04.816 nvmf threads destroy successfully 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.816 04:27:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:04.816 00:11:04.816 real 0m20.267s 00:11:04.816 user 0m52.410s 00:11:04.816 sys 0m6.069s 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:04.816 ************************************ 00:11:04.816 END TEST nvmf_example 00:11:04.816 ************************************ 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:04.816 ************************************ 00:11:04.816 START TEST nvmf_filesystem 00:11:04.816 ************************************ 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:04.816 * Looking for test storage... 
00:11:04.816 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:04.816 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.817 --rc genhtml_branch_coverage=1 00:11:04.817 --rc genhtml_function_coverage=1 00:11:04.817 --rc genhtml_legend=1 00:11:04.817 --rc geninfo_all_blocks=1 00:11:04.817 --rc geninfo_unexecuted_blocks=1 00:11:04.817 00:11:04.817 ' 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.817 --rc genhtml_branch_coverage=1 00:11:04.817 --rc genhtml_function_coverage=1 00:11:04.817 --rc genhtml_legend=1 00:11:04.817 --rc geninfo_all_blocks=1 00:11:04.817 --rc geninfo_unexecuted_blocks=1 00:11:04.817 00:11:04.817 ' 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.817 --rc genhtml_branch_coverage=1 00:11:04.817 --rc genhtml_function_coverage=1 00:11:04.817 --rc genhtml_legend=1 00:11:04.817 --rc geninfo_all_blocks=1 00:11:04.817 --rc geninfo_unexecuted_blocks=1 00:11:04.817 00:11:04.817 ' 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.817 --rc genhtml_branch_coverage=1 00:11:04.817 --rc genhtml_function_coverage=1 00:11:04.817 --rc genhtml_legend=1 00:11:04.817 --rc geninfo_all_blocks=1 00:11:04.817 --rc geninfo_unexecuted_blocks=1 00:11:04.817 00:11:04.817 ' 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:11:04.817 04:27:43 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
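The CONFIG_* run above and below comes from autotest_common.sh sourcing test/common/build_config.sh, the shell mirror of the feature flags chosen at configure time; tests gate themselves on these values. A hedged illustration of the pattern (the guard and the skip message are illustrative, not from this log):

source "$rootdir/test/common/build_config.sh"  # $rootdir: the spdk checkout in this workspace
if [[ $CONFIG_RDMA != y ]]; then               # CONFIG_RDMA=y in the dump below, so this run proceeds
    echo "SPDK was built without RDMA support; an rdma-transport run has nothing to test"
    exit 0
fi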
00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 
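Worth noting in the values just dumped: CONFIG_DPDK_DIR points at a DPDK tree built separately in this workspace rather than SPDK's bundled submodule, and the doubled slash in CONFIG_DPDK_INC_DIR is carried verbatim from the configure invocation (harmless to the compiler). A configure line consistent with these values would look roughly like the following; the job's exact flags are not in this excerpt:

./configure \
    --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build \
    --with-rdma \
    --with-shared \
    --enable-debug   # matches CONFIG_SHARED=y and CONFIG_DEBUG=y in the dump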
00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:04.817 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:04.818 
04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 
-- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:04.818 #define SPDK_CONFIG_H 00:11:04.818 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:04.818 #define SPDK_CONFIG_APPS 1 00:11:04.818 #define SPDK_CONFIG_ARCH native 00:11:04.818 #undef SPDK_CONFIG_ASAN 00:11:04.818 #undef SPDK_CONFIG_AVAHI 00:11:04.818 #undef SPDK_CONFIG_CET 00:11:04.818 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:04.818 #define SPDK_CONFIG_COVERAGE 1 00:11:04.818 #define SPDK_CONFIG_CROSS_PREFIX 00:11:04.818 #undef SPDK_CONFIG_CRYPTO 00:11:04.818 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:04.818 #undef SPDK_CONFIG_CUSTOMOCF 00:11:04.818 #undef SPDK_CONFIG_DAOS 00:11:04.818 #define SPDK_CONFIG_DAOS_DIR 00:11:04.818 #define SPDK_CONFIG_DEBUG 1 00:11:04.818 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:04.818 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:11:04.818 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:11:04.818 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:04.818 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:04.818 #undef SPDK_CONFIG_DPDK_UADK 00:11:04.818 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:11:04.818 #define SPDK_CONFIG_EXAMPLES 1 00:11:04.818 #undef SPDK_CONFIG_FC 00:11:04.818 #define SPDK_CONFIG_FC_PATH 00:11:04.818 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:04.818 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:04.818 #define SPDK_CONFIG_FSDEV 1 00:11:04.818 #undef SPDK_CONFIG_FUSE 00:11:04.818 #undef SPDK_CONFIG_FUZZER 00:11:04.818 #define SPDK_CONFIG_FUZZER_LIB 00:11:04.818 #undef SPDK_CONFIG_GOLANG 00:11:04.818 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:04.818 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:04.818 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:04.818 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:04.818 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:04.818 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:04.818 #undef SPDK_CONFIG_HAVE_LZ4 00:11:04.818 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:04.818 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:04.818 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:04.818 #define SPDK_CONFIG_IDXD 1 00:11:04.818 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:04.818 #undef SPDK_CONFIG_IPSEC_MB 00:11:04.818 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:04.818 #define SPDK_CONFIG_ISAL 1 00:11:04.818 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:04.818 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:04.818 #define SPDK_CONFIG_LIBDIR 00:11:04.818 #undef SPDK_CONFIG_LTO 00:11:04.818 #define SPDK_CONFIG_MAX_LCORES 128 00:11:04.818 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:04.818 #define SPDK_CONFIG_NVME_CUSE 1 00:11:04.818 #undef SPDK_CONFIG_OCF 00:11:04.818 #define SPDK_CONFIG_OCF_PATH 00:11:04.818 #define SPDK_CONFIG_OPENSSL_PATH 00:11:04.818 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:04.818 #define SPDK_CONFIG_PGO_DIR 00:11:04.818 #undef SPDK_CONFIG_PGO_USE 00:11:04.818 #define SPDK_CONFIG_PREFIX /usr/local 00:11:04.818 #undef SPDK_CONFIG_RAID5F 00:11:04.818 #undef SPDK_CONFIG_RBD 00:11:04.818 #define SPDK_CONFIG_RDMA 1 00:11:04.818 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:04.818 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:04.818 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:04.818 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:04.818 #define SPDK_CONFIG_SHARED 1 00:11:04.818 #undef SPDK_CONFIG_SMA 00:11:04.818 #define SPDK_CONFIG_TESTS 1 00:11:04.818 #undef SPDK_CONFIG_TSAN 00:11:04.818 #define SPDK_CONFIG_UBLK 1 00:11:04.818 #define SPDK_CONFIG_UBSAN 1 00:11:04.818 #undef SPDK_CONFIG_UNIT_TESTS 00:11:04.818 #undef SPDK_CONFIG_URING 00:11:04.818 #define SPDK_CONFIG_URING_PATH 00:11:04.818 #undef SPDK_CONFIG_URING_ZNS 00:11:04.818 #undef SPDK_CONFIG_USDT 00:11:04.818 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:04.818 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:04.818 #undef SPDK_CONFIG_VFIO_USER 00:11:04.818 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:04.818 #define SPDK_CONFIG_VHOST 1 00:11:04.818 #define SPDK_CONFIG_VIRTIO 1 00:11:04.818 #undef SPDK_CONFIG_VTUNE 00:11:04.818 #define SPDK_CONFIG_VTUNE_DIR 00:11:04.818 #define SPDK_CONFIG_WERROR 1 00:11:04.818 #define SPDK_CONFIG_WPDK_DIR 00:11:04.818 #undef SPDK_CONFIG_XNVME 00:11:04.818 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.818 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:04.819 04:27:43 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:04.819 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:04.820 04:27:43 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:04.820 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:04.821 04:27:43 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # 
valgrind= 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 203728 ]] 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 203728 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.2EXrLR 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 
00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.2EXrLR/tests/target /tmp/spdk.2EXrLR 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55081005056 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730598912 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6649593856 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30851837952 00:11:04.821 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=30865297408 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=13459456 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12323041280 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23080960 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30864990208 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:04.822 * Looking for test storage... 
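The xtrace above captures set_test_storage() reading `df -T` output into per-mount associative arrays (mounts, fss, sizes, avails, uses); the entries that follow the "Looking for test storage" banner pick the first storage candidate whose filesystem clears the 2 GiB request. A minimal self-contained sketch of that selection pattern, assuming GNU coreutils df — names here are illustrative, not the literal autotest_common.sh implementation:

#!/usr/bin/env bash
# Sketch (illustrative): pick the first candidate directory whose
# filesystem has at least $requested_size bytes available, mirroring
# the df -T parsing pattern visible in the trace above.
requested_size=$((2 * 1024 * 1024 * 1024))   # 2 GiB, as in the log
declare -A fss avails

# -T prints the fs type; --block-size=1 reports sizes in bytes.
while read -r _source fs _size _used avail _pct mount; do
    fss["$mount"]=$fs
    avails["$mount"]=$avail
done < <(df -T --block-size=1 | grep -v '^Filesystem')

for target_dir in "$@"; do
    # Resolve the mount point that actually holds this directory.
    mount=$(df --output=target "$target_dir" | tail -n 1)
    if (( ${avails[$mount]:-0} >= requested_size )); then
        printf '* Found test storage at %s (%s bytes free on %s, %s)\n' \
            "$target_dir" "${avails[$mount]}" "$mount" "${fss[$mount]}"
        exit 0
    fi
done
echo '* No candidate had enough space' >&2
exit 1

Invoked as `./pick_storage.sh "$testdir" "$storage_fallback"`, this reproduces the candidate-ordering behavior traced here: the real helper also special-cases tmpfs/ramfs mounts and grows the fallback, which the sketch omits.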
00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55081005056 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8864186368 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.822 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.822 --rc genhtml_branch_coverage=1 00:11:04.822 --rc genhtml_function_coverage=1 00:11:04.822 --rc genhtml_legend=1 00:11:04.822 --rc geninfo_all_blocks=1 00:11:04.822 --rc geninfo_unexecuted_blocks=1 00:11:04.822 00:11:04.822 ' 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.822 --rc genhtml_branch_coverage=1 00:11:04.822 --rc genhtml_function_coverage=1 00:11:04.822 --rc genhtml_legend=1 00:11:04.822 --rc geninfo_all_blocks=1 00:11:04.822 --rc geninfo_unexecuted_blocks=1 00:11:04.822 00:11:04.822 ' 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.822 --rc genhtml_branch_coverage=1 00:11:04.822 --rc genhtml_function_coverage=1 00:11:04.822 --rc genhtml_legend=1 00:11:04.822 --rc geninfo_all_blocks=1 00:11:04.822 --rc geninfo_unexecuted_blocks=1 00:11:04.822 00:11:04.822 ' 00:11:04.822 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.822 --rc genhtml_branch_coverage=1 00:11:04.822 --rc genhtml_function_coverage=1 00:11:04.822 --rc genhtml_legend=1 00:11:04.823 --rc geninfo_all_blocks=1 00:11:04.823 --rc geninfo_unexecuted_blocks=1 00:11:04.823 00:11:04.823 ' 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.823 04:27:43 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.823 04:27:43 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.823 04:27:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.957 04:27:50 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:12.957 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:12.957 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 
(0x15b3 - 0x1015)' 00:11:12.958 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:12.958 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:12.958 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 
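The discovery loop above resolves each Mellanox PCI function to its Linux net interface by globbing sysfs. A minimal standalone sketch of that mapping (PCI address taken from the trace; the sysfs layout is the standard kernel one):

  # Map a PCI function to its net interface(s), mirroring the
  # pci_net_devs glob in nvmf/common.sh.
  pci=0000:d9:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"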
00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:12.958 04:27:50 
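load_ib_rdma_modules and the interface enumeration just traced reduce to the sequence below; the module names are exactly those probed above, and rxe_cfg_small.sh is the SPDK helper the harness invokes (with hardware mlx5 ports present, the soft-RoCE setup is effectively a listing pass):

  # Load the IB/RDMA core stack, one modprobe per module as in common.sh.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      sudo modprobe "$mod"
  done
  # Enumerate RDMA-capable interfaces (mlx_0_0 and mlx_0_1 on this rig).
  sudo spdk/scripts/rxe_cfg_small.sh rxe-net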
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:12.958 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:12.958 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:12.958 altname enp217s0f0np0 00:11:12.958 altname ens818f0np0 00:11:12.958 inet 192.168.100.8/24 scope global mlx_0_0 00:11:12.958 valid_lft forever preferred_lft forever 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:12.958 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:12.958 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:12.958 altname enp217s0f1np1 00:11:12.958 altname ens818f1np1 00:11:12.958 inet 192.168.100.9/24 scope global mlx_0_1 00:11:12.958 valid_lft forever preferred_lft forever 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.958 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:12.959 192.168.100.9' 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:12.959 192.168.100.9' 
00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:12.959 192.168.100.9' 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.959 ************************************ 00:11:12.959 START TEST nvmf_filesystem_no_in_capsule 00:11:12.959 ************************************ 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=207045 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 207045 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 207045 ']' 00:11:12.959 04:27:50 
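allocate_nic_ips and the target-IP selection condense to the following; the awk/cut pipeline and the head/tail split are verbatim from the trace:

  # First IPv4 address on an interface (nvmf/common.sh get_ip_address).
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
  sudo modprobe nvme-rdma   # host-side NVMe/RDMA driver, loaded last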
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.959 04:27:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.959 [2024-11-17 04:27:50.989785] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:12.959 [2024-11-17 04:27:50.989836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.959 [2024-11-17 04:27:51.084590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.959 [2024-11-17 04:27:51.107193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.959 [2024-11-17 04:27:51.107231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.959 [2024-11-17 04:27:51.107241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.959 [2024-11-17 04:27:51.107249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.959 [2024-11-17 04:27:51.107271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:12.959 [2024-11-17 04:27:51.108870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.959 [2024-11-17 04:27:51.108994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.959 [2024-11-17 04:27:51.109031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.959 [2024-11-17 04:27:51.109032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.959 [2024-11-17 04:27:51.249154] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:12.959 [2024-11-17 04:27:51.270611] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ffbc50/0x2000100) succeed. 00:11:12.959 [2024-11-17 04:27:51.280026] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ffd290/0x20417a0) succeed. 
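rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py client, so the transport creation just traced corresponds to roughly the call below (flag values verbatim from the trace; -u is the I/O unit size and -c the in-capsule data size, zero for this leg of the test):

  # RDMA transport with 1024 shared buffers, 8 KiB I/O units,
  # and in-capsule data disabled (-c 0).
  sudo spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0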
00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.959 Malloc1 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.959 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.959 [2024-11-17 04:27:51.537299] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:12.960 04:27:51 
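Provisioning the target takes four RPCs, all visible in the trace above; the NQN and serial are the suite's fixed test values, and rpc.py is again assumed as the transport for rpc_cmd:

  # 512 MiB ramdisk: 1048576 blocks x 512 bytes.
  sudo spdk/scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  # Subsystem allowing any host (-a) with a fixed serial (-s).
  sudo spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  # Attach the ramdisk as a namespace, then listen on the first RDMA port.
  sudo spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  sudo spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420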
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:12.960 { 00:11:12.960 "name": "Malloc1", 00:11:12.960 "aliases": [ 00:11:12.960 "753f6712-0f57-4f87-820c-013ec53f57bb" 00:11:12.960 ], 00:11:12.960 "product_name": "Malloc disk", 00:11:12.960 "block_size": 512, 00:11:12.960 "num_blocks": 1048576, 00:11:12.960 "uuid": "753f6712-0f57-4f87-820c-013ec53f57bb", 00:11:12.960 "assigned_rate_limits": { 00:11:12.960 "rw_ios_per_sec": 0, 00:11:12.960 "rw_mbytes_per_sec": 0, 00:11:12.960 "r_mbytes_per_sec": 0, 00:11:12.960 "w_mbytes_per_sec": 0 00:11:12.960 }, 00:11:12.960 "claimed": true, 00:11:12.960 "claim_type": "exclusive_write", 00:11:12.960 "zoned": false, 00:11:12.960 "supported_io_types": { 00:11:12.960 "read": true, 00:11:12.960 "write": true, 00:11:12.960 "unmap": true, 00:11:12.960 "flush": true, 00:11:12.960 "reset": true, 00:11:12.960 "nvme_admin": false, 00:11:12.960 "nvme_io": false, 00:11:12.960 "nvme_io_md": false, 00:11:12.960 "write_zeroes": true, 00:11:12.960 "zcopy": true, 00:11:12.960 "get_zone_info": false, 00:11:12.960 "zone_management": false, 00:11:12.960 "zone_append": false, 00:11:12.960 "compare": false, 00:11:12.960 "compare_and_write": false, 00:11:12.960 "abort": true, 00:11:12.960 "seek_hole": false, 00:11:12.960 "seek_data": false, 00:11:12.960 "copy": true, 00:11:12.960 "nvme_iov_md": false 00:11:12.960 }, 00:11:12.960 "memory_domains": [ 00:11:12.960 { 00:11:12.960 "dma_device_id": "system", 00:11:12.960 "dma_device_type": 1 00:11:12.960 }, 00:11:12.960 { 00:11:12.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.960 "dma_device_type": 2 00:11:12.960 } 00:11:12.960 ], 00:11:12.960 "driver_specific": {} 00:11:12.960 } 00:11:12.960 ]' 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
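get_bdev_size reads the JSON above and multiplies block_size by num_blocks; condensed:

  # Size of a bdev in bytes, extracted with jq as the trace shows.
  bdev_info=$(sudo spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1)
  bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 512
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1048576
  echo $((bs * nb))                             # 536870912 = 512 MiB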
malloc_size=536870912 00:11:12.960 04:27:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:13.899 04:27:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.899 04:27:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:13.899 04:27:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.899 04:27:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:13.899 04:27:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
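On the initiator side, the connect plus the waitforserial poll come down to nvme-cli and lsblk; every flag below is taken from the trace (-i 15 sets the number of I/O queues, chosen earlier on the mlx5 path):

  sudo nvme connect -i 15 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e \
      -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  # Poll until a block device advertises the subsystem serial.
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 2
  done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')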
-- # (( nvme_size == malloc_size )) 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:16.439 04:27:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.378 ************************************ 00:11:17.378 START TEST filesystem_ext4 00:11:17.378 ************************************ 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:17.378 04:27:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:17.378 mke2fs 1.47.0 (5-Feb-2023) 00:11:17.378 Discarding device blocks: 0/522240 done 00:11:17.378 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:17.378 Filesystem UUID: f9ac74ed-95d8-4b0e-80cc-d2cc644a0c24 00:11:17.378 Superblock backups stored on 
blocks: 00:11:17.378 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:17.378 00:11:17.378 Allocating group tables: 0/64 done 00:11:17.378 Writing inode tables: 0/64 done 00:11:17.378 Creating journal (8192 blocks): done 00:11:17.378 Writing superblocks and filesystem accounting information: 0/64 done 00:11:17.378 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 207045 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:17.378 00:11:17.378 real 0m0.236s 00:11:17.378 user 0m0.027s 00:11:17.378 sys 0m0.114s 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:17.378 ************************************ 00:11:17.378 END TEST filesystem_ext4 00:11:17.378 ************************************ 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.378 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
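Each filesystem leg repeats the same shape: partition the namespace once, mkfs, then a mount/touch/sync/rm/umount smoke cycle. For the ext4 pass just completed:

  sudo parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  sudo partprobe
  sudo mkfs.ext4 -F /dev/nvme0n1p1     # ext4 spells "force" as -F
  sudo mount /dev/nvme0n1p1 /mnt/device
  sudo touch /mnt/device/aaa && sync   # prove the fs accepts writes
  sudo rm /mnt/device/aaa && sync
  sudo umount /mnt/device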
-- common/autotest_common.sh@10 -- # set +x 00:11:17.639 ************************************ 00:11:17.639 START TEST filesystem_btrfs 00:11:17.639 ************************************ 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:17.639 btrfs-progs v6.8.1 00:11:17.639 See https://btrfs.readthedocs.io for more information. 00:11:17.639 00:11:17.639 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:17.639 NOTE: several default settings have changed in version 5.15, please make sure 00:11:17.639 this does not affect your deployments: 00:11:17.639 - DUP for metadata (-m dup) 00:11:17.639 - enabled no-holes (-O no-holes) 00:11:17.639 - enabled free-space-tree (-R free-space-tree) 00:11:17.639 00:11:17.639 Label: (null) 00:11:17.639 UUID: 0f418540-788f-42ab-a09c-474ef07993d9 00:11:17.639 Node size: 16384 00:11:17.639 Sector size: 4096 (CPU page size: 4096) 00:11:17.639 Filesystem size: 510.00MiB 00:11:17.639 Block group profiles: 00:11:17.639 Data: single 8.00MiB 00:11:17.639 Metadata: DUP 32.00MiB 00:11:17.639 System: DUP 8.00MiB 00:11:17.639 SSD detected: yes 00:11:17.639 Zoned device: no 00:11:17.639 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:17.639 Checksum: crc32c 00:11:17.639 Number of devices: 1 00:11:17.639 Devices: 00:11:17.639 ID SIZE PATH 00:11:17.639 1 510.00MiB /dev/nvme0n1p1 00:11:17.639 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:17.639 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 207045 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:17.900 00:11:17.900 real 0m0.283s 00:11:17.900 user 0m0.033s 00:11:17.900 sys 0m0.157s 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:17.900 ************************************ 00:11:17.900 END TEST filesystem_btrfs 
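The only per-filesystem difference in make_filesystem is the force flag, as the '[' btrfs = ext4 ']' branch above shows; a condensed sketch of that helper (the real one also retries mkfs a few times, per the local i=0 counter in the trace):

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      # ext4 takes -F; btrfs and xfs take -f.
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      "mkfs.$fstype" "$force" "$dev_name"
  }
  make_filesystem btrfs /dev/nvme0n1p1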
00:11:17.900 ************************************ 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.900 ************************************ 00:11:17.900 START TEST filesystem_xfs 00:11:17.900 ************************************ 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:17.900 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:17.900 = sectsz=512 attr=2, projid32bit=1 00:11:17.900 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:17.900 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:17.900 data = bsize=4096 blocks=130560, imaxpct=25 00:11:17.900 = sunit=0 swidth=0 blks 00:11:17.900 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:17.900 log =internal log bsize=4096 blocks=16384, version=2 00:11:17.900 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:17.900 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:17.900 Discarding blocks...Done. 
00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:17.900 04:27:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 207045 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:18.470 00:11:18.470 real 0m0.674s 00:11:18.470 user 0m0.033s 00:11:18.470 sys 0m0.106s 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:18.470 ************************************ 00:11:18.470 END TEST filesystem_xfs 00:11:18.470 ************************************ 00:11:18.470 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:18.730 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:18.730 04:27:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:19.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:19.670 04:27:58 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 207045 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 207045 ']' 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 207045 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 207045 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 207045' 00:11:19.670 killing process with pid 207045 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 207045 00:11:19.670 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 207045 00:11:20.239 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:20.239 00:11:20.239 real 0m7.842s 00:11:20.239 user 0m30.604s 00:11:20.239 sys 0m1.338s 00:11:20.240 04:27:58 
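Teardown, as traced above, reverses the setup: drop the partition under a lock, flush, disconnect the host, delete the subsystem, then kill the target and reap its pid (207045 in this run):

  sudo flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  sudo spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 207045 && wait 207045   # wait works here because nvmf_tgt is the shell's own child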
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.240 ************************************ 00:11:20.240 END TEST nvmf_filesystem_no_in_capsule 00:11:20.240 ************************************ 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.240 ************************************ 00:11:20.240 START TEST nvmf_filesystem_in_capsule 00:11:20.240 ************************************ 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=208637 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 208637 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 208637 ']' 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
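For reference, the nvmfappstart/waitforlisten entries above reduce to the following standalone sketch: launch the traced nvmf_tgt binary in the background, then block until its RPC socket answers. The rpc_get_methods probe is an assumption about waitforlisten's internals; the real helpers live in nvmf/common.sh and autotest_common.sh.

# launch the target exactly as traced above, then poll the RPC socket
# (the rpc_get_methods readiness probe is an assumption)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
    sleep 0.5
done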
00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.240 04:27:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.240 [2024-11-17 04:27:58.919695] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:20.240 [2024-11-17 04:27:58.919745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.240 [2024-11-17 04:27:59.013553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.240 [2024-11-17 04:27:59.035856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.240 [2024-11-17 04:27:59.035894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.240 [2024-11-17 04:27:59.035903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.240 [2024-11-17 04:27:59.035912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.240 [2024-11-17 04:27:59.035918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.240 [2024-11-17 04:27:59.037640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.240 [2024-11-17 04:27:59.037752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.240 [2024-11-17 04:27:59.037862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.240 [2024-11-17 04:27:59.037864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.500 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.500 [2024-11-17 04:27:59.202538] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x627c50/0x62c100) 
succeed. 00:11:20.500 [2024-11-17 04:27:59.211655] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x629290/0x66d7a0) succeed. 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 Malloc1 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 [2024-11-17 04:27:59.486488] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
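Stripped of the xtrace prefixes, the rpc_cmd calls traced above (filesystem.sh lines 52-56) amount to this provisioning sequence; the -c 4096 on nvmf_create_transport is what enables 4 KiB of in-capsule data for this test variant. The rpc.py invocation form is assumed here, since the trace goes through the rpc_cmd wrapper.

rpc=scripts/rpc.py    # path assumed; rpc_cmd in the trace resolves to this script
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
$rpc bdev_malloc_create 512 512 -b Malloc1     # 512 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420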
00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:20.761 { 00:11:20.761 "name": "Malloc1", 00:11:20.761 "aliases": [ 00:11:20.761 "cbc298e0-3af6-4e0f-a13a-891d74b6b544" 00:11:20.761 ], 00:11:20.761 "product_name": "Malloc disk", 00:11:20.761 "block_size": 512, 00:11:20.761 "num_blocks": 1048576, 00:11:20.761 "uuid": "cbc298e0-3af6-4e0f-a13a-891d74b6b544", 00:11:20.761 "assigned_rate_limits": { 00:11:20.761 "rw_ios_per_sec": 0, 00:11:20.761 "rw_mbytes_per_sec": 0, 00:11:20.761 "r_mbytes_per_sec": 0, 00:11:20.761 "w_mbytes_per_sec": 0 00:11:20.761 }, 00:11:20.761 "claimed": true, 00:11:20.761 "claim_type": "exclusive_write", 00:11:20.761 "zoned": false, 00:11:20.761 "supported_io_types": { 00:11:20.761 "read": true, 00:11:20.761 "write": true, 00:11:20.761 "unmap": true, 00:11:20.761 "flush": true, 00:11:20.761 "reset": true, 00:11:20.761 "nvme_admin": false, 00:11:20.761 "nvme_io": false, 00:11:20.761 "nvme_io_md": false, 00:11:20.761 "write_zeroes": true, 00:11:20.761 "zcopy": true, 00:11:20.761 "get_zone_info": false, 00:11:20.761 "zone_management": false, 00:11:20.761 "zone_append": false, 00:11:20.761 "compare": false, 00:11:20.761 "compare_and_write": false, 00:11:20.761 "abort": true, 00:11:20.761 "seek_hole": false, 00:11:20.761 "seek_data": false, 00:11:20.761 "copy": true, 00:11:20.761 "nvme_iov_md": false 00:11:20.761 }, 00:11:20.761 "memory_domains": [ 00:11:20.761 { 00:11:20.761 "dma_device_id": "system", 00:11:20.761 "dma_device_type": 1 00:11:20.761 }, 00:11:20.761 { 00:11:20.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.761 "dma_device_type": 2 00:11:20.761 } 00:11:20.761 ], 00:11:20.761 "driver_specific": {} 00:11:20.761 } 00:11:20.761 ]' 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:20.761 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:21.021 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:21.021 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:21.021 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:21.021 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
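The jq calls traced above implement get_bdev_size: pull the bdev's JSON description over RPC, extract block_size and num_blocks, and convert to MiB. A condensed reconstruction, with the arithmetic inferred from the traced values (bs=512, nb=1048576 giving 512 MiB and a 536870912-byte malloc_size):

# reconstruction of the size computation driven by the jq calls above
bdev_info=$(rpc_cmd bdev_get_bdevs -b Malloc1)
bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 512 bytes
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 1048576 blocks
bdev_size=$(( bs * nb / 1024 / 1024 ))          # 512 MiB
malloc_size=$(( bdev_size * 1024 * 1024 ))      # 536870912 bytes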
00:11:21.021 04:27:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:21.959 04:28:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.959 04:28:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:21.959 04:28:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.959 04:28:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:21.959 04:28:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:23.870 04:28:02 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:23.870 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:24.130 04:28:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:25.070 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.071 ************************************ 00:11:25.071 START TEST filesystem_in_capsule_ext4 00:11:25.071 ************************************ 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:25.071 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:25.071 mke2fs 1.47.0 (5-Feb-2023) 00:11:25.331 Discarding device blocks: 0/522240 done 00:11:25.331 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:25.331 Filesystem UUID: f96b40ee-2a98-4cbf-abfa-45e1519a5ea0 00:11:25.331 
Superblock backups stored on blocks: 00:11:25.331 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:25.331 00:11:25.331 Allocating group tables: 0/64 done 00:11:25.331 Writing inode tables: 0/64 done 00:11:25.331 Creating journal (8192 blocks): done 00:11:25.331 Writing superblocks and filesystem accounting information: 0/64 done 00:11:25.331 00:11:25.331 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:25.331 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.331 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.331 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:25.331 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.331 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:25.331 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:25.331 04:28:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.331 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 208637 00:11:25.331 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.331 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.331 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.331 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.331 00:11:25.331 real 0m0.199s 00:11:25.331 user 0m0.025s 00:11:25.331 sys 0m0.084s 00:11:25.331 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.331 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:25.331 ************************************ 00:11:25.331 END TEST filesystem_in_capsule_ext4 00:11:25.331 ************************************ 00:11:25.331 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:25.332 04:28:04 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.332 ************************************ 00:11:25.332 START TEST filesystem_in_capsule_btrfs 00:11:25.332 ************************************ 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:25.332 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:25.592 btrfs-progs v6.8.1 00:11:25.592 See https://btrfs.readthedocs.io for more information. 00:11:25.592 00:11:25.592 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:25.592 NOTE: several default settings have changed in version 5.15, please make sure 00:11:25.592 this does not affect your deployments: 00:11:25.592 - DUP for metadata (-m dup) 00:11:25.592 - enabled no-holes (-O no-holes) 00:11:25.592 - enabled free-space-tree (-R free-space-tree) 00:11:25.592 00:11:25.592 Label: (null) 00:11:25.592 UUID: 939e9a31-c18b-4ac8-a993-ae69dbf018b9 00:11:25.592 Node size: 16384 00:11:25.592 Sector size: 4096 (CPU page size: 4096) 00:11:25.592 Filesystem size: 510.00MiB 00:11:25.592 Block group profiles: 00:11:25.592 Data: single 8.00MiB 00:11:25.592 Metadata: DUP 32.00MiB 00:11:25.592 System: DUP 8.00MiB 00:11:25.592 SSD detected: yes 00:11:25.592 Zoned device: no 00:11:25.592 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:25.592 Checksum: crc32c 00:11:25.592 Number of devices: 1 00:11:25.592 Devices: 00:11:25.592 ID SIZE PATH 00:11:25.592 1 510.00MiB /dev/nvme0n1p1 00:11:25.592 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 208637 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.592 00:11:25.592 real 0m0.238s 00:11:25.592 user 0m0.025s 00:11:25.592 sys 0m0.122s 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.592 ************************************ 00:11:25.592 END TEST filesystem_in_capsule_btrfs 00:11:25.592 ************************************ 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.592 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.853 ************************************ 00:11:25.853 START TEST filesystem_in_capsule_xfs 00:11:25.853 ************************************ 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:25.853 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:25.853 = sectsz=512 attr=2, projid32bit=1 00:11:25.853 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:25.853 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:25.853 data = bsize=4096 blocks=130560, imaxpct=25 00:11:25.853 = sunit=0 swidth=0 blks 00:11:25.853 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:25.853 log =internal log bsize=4096 blocks=16384, version=2 00:11:25.853 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:25.853 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:25.853 Discarding blocks...Done. 
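The make_filesystem helper traced across the ext4, btrfs, and xfs runs above picks the right force flag per filesystem (-F for mkfs.ext4, -f otherwise) and runs mkfs with a retry counter. A sketch of its shape, reconstructed from the autotest_common.sh line numbers in the trace (@930-@949); the retry bound is an assumption, since only i=0 and the final return 0 are visible:

# shape of make_filesystem as seen in the xtrace above
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    while (( i++ < 5 )); do                     # retry bound assumed
        mkfs."$fstype" $force "$dev_name" && return 0
        sleep 1
    done
    return 1
}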
00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 208637 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.853 00:11:25.853 real 0m0.199s 00:11:25.853 user 0m0.026s 00:11:25.853 sys 0m0.075s 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.853 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:25.853 ************************************ 00:11:25.853 END TEST filesystem_in_capsule_xfs 00:11:25.853 ************************************ 00:11:26.114 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:26.114 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:26.114 04:28:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.055 04:28:05 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 208637 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 208637 ']' 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 208637 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 208637 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 208637' 00:11:27.055 killing process with pid 208637 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 208637 00:11:27.055 04:28:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 208637 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:27.625 00:11:27.625 real 0m7.331s 00:11:27.625 
user 0m28.504s 00:11:27.625 sys 0m1.256s 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 ************************************ 00:11:27.625 END TEST nvmf_filesystem_in_capsule 00:11:27.625 ************************************ 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:27.625 rmmod nvme_rdma 00:11:27.625 rmmod nvme_fabrics 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:27.625 00:11:27.625 real 0m23.196s 00:11:27.625 user 1m1.551s 00:11:27.625 sys 0m8.427s 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 ************************************ 00:11:27.625 END TEST nvmf_filesystem 00:11:27.625 ************************************ 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 ************************************ 00:11:27.625 START TEST nvmf_target_discovery 00:11:27.625 ************************************ 00:11:27.625 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:27.887 * Looking for test storage... 
00:11:27.887 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:27.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.887 --rc genhtml_branch_coverage=1 00:11:27.887 --rc genhtml_function_coverage=1 00:11:27.887 --rc genhtml_legend=1 00:11:27.887 --rc geninfo_all_blocks=1 00:11:27.887 --rc geninfo_unexecuted_blocks=1 00:11:27.887 00:11:27.887 ' 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:27.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.887 --rc genhtml_branch_coverage=1 00:11:27.887 --rc genhtml_function_coverage=1 00:11:27.887 --rc genhtml_legend=1 00:11:27.887 --rc geninfo_all_blocks=1 00:11:27.887 --rc geninfo_unexecuted_blocks=1 00:11:27.887 00:11:27.887 ' 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:27.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.887 --rc genhtml_branch_coverage=1 00:11:27.887 --rc genhtml_function_coverage=1 00:11:27.887 --rc genhtml_legend=1 00:11:27.887 --rc geninfo_all_blocks=1 00:11:27.887 --rc geninfo_unexecuted_blocks=1 00:11:27.887 00:11:27.887 ' 00:11:27.887 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:27.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.888 --rc genhtml_branch_coverage=1 00:11:27.888 --rc genhtml_function_coverage=1 00:11:27.888 --rc genhtml_legend=1 00:11:27.888 --rc geninfo_all_blocks=1 00:11:27.888 --rc geninfo_unexecuted_blocks=1 00:11:27.888 00:11:27.888 ' 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.888 04:28:06 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.888 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.888 04:28:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.025 04:28:13 
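[Note] The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above comes from testing an empty variable with -eq: the [ builtin sees an empty string where an integer is required, prints the error, and the test simply evaluates false, so the run continues. A defaulted expansion avoids the noise. SOME_FLAG below is a stand-in name, since the trace does not show which variable is empty:

    # Sketch: give the flag a numeric default so [ always compares integers.
    SOME_FLAG=""                              # empty in this run, as in the log
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then      # 0 substituted when unset/empty
        echo "flag enabled"
    fi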
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.025 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
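[Note] The e810/x722/mlx arrays above are filled by indexing a pci_bus_cache map with "vendor:device" keys (0x8086 for Intel, 0x15b3 for Mellanox, per the locals a few records earlier), and the mlx list wins on this node because the driver is mlx5. Roughly how such a cache can be built from lspci, assuming lspci -Dnmm's quoted numeric output; this is a sketch of the idea, not the script's own caching helper:

    # Sketch: map "0xVENDOR:0xDEVICE" -> space-separated PCI addresses.
    declare -A pci_bus_cache
    while read -r addr class vendor device; do
        key="$vendor:$device"
        pci_bus_cache[$key]="${pci_bus_cache[$key]:+${pci_bus_cache[$key]} }$addr"
    done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, $2, "0x"$3, "0x"$4}')
    echo "${pci_bus_cache[0x15b3:0x1015]}"   # -> 0000:d9:00.0 0000:d9:00.1 here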
00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:36.026 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:36.026 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:36.026 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.026 04:28:13 
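[Note] The "Found net devices under ..." lines above are nothing more than a sysfs glob plus a basename strip: every netdev owned by a PCI function appears as a directory under that function's net/ subtree. Replayed standalone, with the address taken from the log output:

    # Sketch: resolve a PCI function to its kernel net interfaces.
    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # absolute sysfs paths
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the basenames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> mlx_0_0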
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:36.026 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
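[Note] rdma_device_init above amounts to probing the whole kernel RDMA stack before any interface work begins; the module list is exactly what the trace shows at common.sh lines 66-72. As a loop (the real script probes them one statement per line, and does not tolerate failures the way this sketch does):

    # Sketch: load the RDMA/IB modules the trace probes, in the same order.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || echo "warning: could not load $mod" >&2
    done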
00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:36.026 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:36.026 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:36.026 altname enp217s0f0np0 00:11:36.026 altname ens818f0np0 00:11:36.026 inet 192.168.100.8/24 scope global mlx_0_0 00:11:36.026 valid_lft forever preferred_lft forever 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:36.026 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:36.027 04:28:13 
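[Note] The per-interface IP lookup replayed twice around this point (for mlx_0_0 above and mlx_0_1 just below) is a three-stage pipeline at common.sh lines 116-117; as a standalone function it is simply:

    # get_ip_address as the trace runs it: the first IPv4 address of an
    # interface, with the /prefix suffix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node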
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:36.027 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:36.027 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:36.027 altname enp217s0f1np1 00:11:36.027 altname ens818f1np1 00:11:36.027 inet 192.168.100.9/24 scope global mlx_0_1 00:11:36.027 valid_lft forever preferred_lft forever 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
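[Note] get_rdma_if_list, traced above and below, intersects the detected net_devs with the interfaces rxe_cfg reports; `continue 2` jumps back to the outer loop as soon as an interface matches, which is why mlx_0_1 is first tested against mlx_0_0 and then against itself. The core of the function, as the trace at common.sh lines 105-109 replays it:

    # Sketch of get_rdma_if_list's inner loops.
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $rxe_net_dev == "$net_dev" ]]; then
                echo "$net_dev"    # interface is RDMA-capable, report it
                continue 2         # move on to the next net_dev
            fi
        done
    done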
00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:36.027 192.168.100.9' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:36.027 192.168.100.9' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:36.027 192.168.100.9' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.027 04:28:13 
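[Note] RDMA_IP_LIST above is a newline-separated list, so the two target IPs fall out of it with head and tail exactly as traced at common.sh lines 485-486:

    # Sketch: how the trace splits RDMA_IP_LIST into the two target IPs.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9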
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=213513 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 213513 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 213513 ']' 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.027 04:28:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.027 [2024-11-17 04:28:13.943180] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:36.027 [2024-11-17 04:28:13.943231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.027 [2024-11-17 04:28:14.033675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.027 [2024-11-17 04:28:14.056215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.027 [2024-11-17 04:28:14.056253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.027 [2024-11-17 04:28:14.056264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.027 [2024-11-17 04:28:14.056272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.027 [2024-11-17 04:28:14.056278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
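[Note] nvmfappstart above launches nvmf_tgt with `-i 0 -e 0xFFFF -m 0xF` (shared-memory id 0, all tracepoint groups, cores 0-3) and then blocks in waitforlisten until pid 213513 answers on /var/tmp/spdk.sock. A minimal sketch of that wait; the retry shape is simplified from autotest_common.sh, and spdk_get_version is just one cheap RPC that serves as a liveness probe:

    # Sketch: poll the RPC socket until the target is up or gone.
    nvmfpid=213513                       # pid reported in the log
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break                        # target is listening
        fi
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.1
    done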
00:11:36.027 [2024-11-17 04:28:14.057897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.027 [2024-11-17 04:28:14.058019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.027 [2024-11-17 04:28:14.058055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.027 [2024-11-17 04:28:14.058057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.027 [2024-11-17 04:28:14.227744] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x917c50/0x91c100) succeed. 00:11:36.027 [2024-11-17 04:28:14.237055] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x919290/0x95d7a0) succeed. 
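[Note] With all four reactors up and both IB devices created, discovery.sh line 23 above registers the RDMA transport. rpc_cmd is a thin wrapper around scripts/rpc.py, so the same call spelled out directly is (flags exactly as traced; -u sets the I/O unit size):

    # The transport-creation RPC from discovery.sh line 23, unwrapped.
    scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192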
00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.027 Null1 00:11:36.027 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 [2024-11-17 04:28:14.422788] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 Null2 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:36.028 04:28:14 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 Null3 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 
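[Note] The Null1-Null4 blocks above and below are four unrollings of one loop, discovery.sh lines 26-30: create a null bdev, wrap it in a subsystem, attach it as a namespace, and expose it on the RDMA listener. Reassembled from the trace (the real script presumably uses variables such as $NVMF_FIRST_TARGET_IP where literals appear here):

    # discovery.sh lines 26-30, the loop these trace records unroll.
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512   # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
    done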
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 Null4 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.028 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:11:36.028 00:11:36.028 Discovery Log Number of Records 6, Generation counter 6 00:11:36.028 =====Discovery Log Entry 0====== 00:11:36.028 trtype: rdma 00:11:36.028 adrfam: ipv4 00:11:36.028 subtype: current discovery subsystem 00:11:36.028 treq: not required 00:11:36.028 portid: 0 00:11:36.028 trsvcid: 4420 00:11:36.028 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:36.028 traddr: 192.168.100.8 00:11:36.028 eflags: explicit discovery connections, duplicate discovery information 00:11:36.028 rdma_prtype: not specified 00:11:36.028 rdma_qptype: connected 00:11:36.028 rdma_cms: rdma-cm 00:11:36.028 rdma_pkey: 0x0000 00:11:36.028 =====Discovery Log Entry 1====== 00:11:36.028 trtype: rdma 00:11:36.028 adrfam: ipv4 00:11:36.028 subtype: nvme subsystem 00:11:36.028 treq: not required 00:11:36.028 portid: 0 00:11:36.028 trsvcid: 4420 00:11:36.028 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:36.028 traddr: 192.168.100.8 00:11:36.028 eflags: none 00:11:36.028 rdma_prtype: not specified 00:11:36.028 rdma_qptype: connected 00:11:36.028 rdma_cms: rdma-cm 00:11:36.028 rdma_pkey: 0x0000 00:11:36.028 =====Discovery Log Entry 2====== 00:11:36.028 trtype: rdma 00:11:36.028 adrfam: ipv4 00:11:36.028 subtype: nvme subsystem 00:11:36.028 treq: not required 00:11:36.028 portid: 0 00:11:36.028 trsvcid: 4420 00:11:36.028 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:36.028 traddr: 192.168.100.8 00:11:36.028 eflags: none 00:11:36.028 rdma_prtype: not specified 00:11:36.028 rdma_qptype: connected 00:11:36.028 rdma_cms: rdma-cm 00:11:36.028 rdma_pkey: 0x0000 00:11:36.028 =====Discovery Log Entry 3====== 00:11:36.028 trtype: rdma 00:11:36.028 adrfam: ipv4 00:11:36.028 subtype: nvme subsystem 00:11:36.028 treq: not required 00:11:36.028 portid: 0 00:11:36.028 trsvcid: 4420 00:11:36.028 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:36.028 traddr: 192.168.100.8 00:11:36.028 eflags: none 00:11:36.028 rdma_prtype: not specified 00:11:36.028 rdma_qptype: connected 00:11:36.028 rdma_cms: rdma-cm 00:11:36.028 rdma_pkey: 0x0000 00:11:36.029 =====Discovery Log Entry 4====== 00:11:36.029 trtype: rdma 00:11:36.029 adrfam: ipv4 00:11:36.029 subtype: nvme subsystem 00:11:36.029 treq: not required 00:11:36.029 portid: 0 00:11:36.029 trsvcid: 4420 00:11:36.029 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:36.029 traddr: 192.168.100.8 00:11:36.029 eflags: none 00:11:36.029 rdma_prtype: not specified 00:11:36.029 rdma_qptype: connected 00:11:36.029 rdma_cms: rdma-cm 00:11:36.029 rdma_pkey: 0x0000 00:11:36.029 =====Discovery Log Entry 5====== 00:11:36.029 trtype: rdma 00:11:36.029 adrfam: ipv4 00:11:36.029 subtype: discovery subsystem referral 00:11:36.029 treq: not required 00:11:36.029 portid: 0 00:11:36.029 trsvcid: 4430 00:11:36.029 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:36.029 traddr: 192.168.100.8 00:11:36.029 eflags: none 00:11:36.029 rdma_prtype: unrecognized 00:11:36.029 rdma_qptype: unrecognized 00:11:36.029 rdma_cms: unrecognized 00:11:36.029 rdma_pkey: 0x0000 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:36.029 Perform nvmf subsystem discovery via RPC 00:11:36.029 04:28:14 
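[Note] The discovery run above returns six records: the current discovery subsystem, the four cnode subsystems, and the port-4430 referral added at discovery.sh line 35. Scraping the subsystem NQNs out of that page is a one-liner; a sketch, assuming the "subnqn: ..." field layout shown in the output above:

    # Sketch: pull the subsystem NQNs out of the discovery page output.
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -a 192.168.100.8 -s 4420 |
        awk '$1 == "subnqn:" {print $2}'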
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.029 [ 00:11:36.029 { 00:11:36.029 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:36.029 "subtype": "Discovery", 00:11:36.029 "listen_addresses": [ 00:11:36.029 { 00:11:36.029 "trtype": "RDMA", 00:11:36.029 "adrfam": "IPv4", 00:11:36.029 "traddr": "192.168.100.8", 00:11:36.029 "trsvcid": "4420" 00:11:36.029 } 00:11:36.029 ], 00:11:36.029 "allow_any_host": true, 00:11:36.029 "hosts": [] 00:11:36.029 }, 00:11:36.029 { 00:11:36.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:36.029 "subtype": "NVMe", 00:11:36.029 "listen_addresses": [ 00:11:36.029 { 00:11:36.029 "trtype": "RDMA", 00:11:36.029 "adrfam": "IPv4", 00:11:36.029 "traddr": "192.168.100.8", 00:11:36.029 "trsvcid": "4420" 00:11:36.029 } 00:11:36.029 ], 00:11:36.029 "allow_any_host": true, 00:11:36.029 "hosts": [], 00:11:36.029 "serial_number": "SPDK00000000000001", 00:11:36.029 "model_number": "SPDK bdev Controller", 00:11:36.029 "max_namespaces": 32, 00:11:36.029 "min_cntlid": 1, 00:11:36.029 "max_cntlid": 65519, 00:11:36.029 "namespaces": [ 00:11:36.029 { 00:11:36.029 "nsid": 1, 00:11:36.029 "bdev_name": "Null1", 00:11:36.029 "name": "Null1", 00:11:36.029 "nguid": "48369B10474846FD9A247A463EE5F67E", 00:11:36.029 "uuid": "48369b10-4748-46fd-9a24-7a463ee5f67e" 00:11:36.029 } 00:11:36.029 ] 00:11:36.029 }, 00:11:36.029 { 00:11:36.029 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:36.029 "subtype": "NVMe", 00:11:36.029 "listen_addresses": [ 00:11:36.029 { 00:11:36.029 "trtype": "RDMA", 00:11:36.029 "adrfam": "IPv4", 00:11:36.029 "traddr": "192.168.100.8", 00:11:36.029 "trsvcid": "4420" 00:11:36.029 } 00:11:36.029 ], 00:11:36.029 "allow_any_host": true, 00:11:36.029 "hosts": [], 00:11:36.029 "serial_number": "SPDK00000000000002", 00:11:36.029 "model_number": "SPDK bdev Controller", 00:11:36.029 "max_namespaces": 32, 00:11:36.029 "min_cntlid": 1, 00:11:36.029 "max_cntlid": 65519, 00:11:36.029 "namespaces": [ 00:11:36.029 { 00:11:36.029 "nsid": 1, 00:11:36.029 "bdev_name": "Null2", 00:11:36.029 "name": "Null2", 00:11:36.029 "nguid": "D74EA3E47DFA48328044EA862D9CE746", 00:11:36.029 "uuid": "d74ea3e4-7dfa-4832-8044-ea862d9ce746" 00:11:36.029 } 00:11:36.029 ] 00:11:36.029 }, 00:11:36.029 { 00:11:36.029 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:36.029 "subtype": "NVMe", 00:11:36.029 "listen_addresses": [ 00:11:36.029 { 00:11:36.029 "trtype": "RDMA", 00:11:36.029 "adrfam": "IPv4", 00:11:36.029 "traddr": "192.168.100.8", 00:11:36.029 "trsvcid": "4420" 00:11:36.029 } 00:11:36.029 ], 00:11:36.029 "allow_any_host": true, 00:11:36.029 "hosts": [], 00:11:36.029 "serial_number": "SPDK00000000000003", 00:11:36.029 "model_number": "SPDK bdev Controller", 00:11:36.029 "max_namespaces": 32, 00:11:36.029 "min_cntlid": 1, 00:11:36.029 "max_cntlid": 65519, 00:11:36.029 "namespaces": [ 00:11:36.029 { 00:11:36.029 "nsid": 1, 00:11:36.029 "bdev_name": "Null3", 00:11:36.029 "name": "Null3", 00:11:36.029 "nguid": "6008151FE0FD449D8A6F87ABB789D9BC", 00:11:36.029 "uuid": "6008151f-e0fd-449d-8a6f-87abb789d9bc" 00:11:36.029 } 00:11:36.029 ] 00:11:36.029 }, 00:11:36.029 { 00:11:36.029 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:36.029 "subtype": "NVMe", 00:11:36.029 "listen_addresses": [ 00:11:36.029 { 00:11:36.029 
"trtype": "RDMA", 00:11:36.029 "adrfam": "IPv4", 00:11:36.029 "traddr": "192.168.100.8", 00:11:36.029 "trsvcid": "4420" 00:11:36.029 } 00:11:36.029 ], 00:11:36.029 "allow_any_host": true, 00:11:36.029 "hosts": [], 00:11:36.029 "serial_number": "SPDK00000000000004", 00:11:36.029 "model_number": "SPDK bdev Controller", 00:11:36.029 "max_namespaces": 32, 00:11:36.029 "min_cntlid": 1, 00:11:36.029 "max_cntlid": 65519, 00:11:36.029 "namespaces": [ 00:11:36.029 { 00:11:36.029 "nsid": 1, 00:11:36.029 "bdev_name": "Null4", 00:11:36.029 "name": "Null4", 00:11:36.029 "nguid": "9A8E9A6418C34F0AA5EAD9B41A1B61F5", 00:11:36.029 "uuid": "9a8e9a64-18c3-4f0a-a5ea-d9b41a1b61f5" 00:11:36.029 } 00:11:36.029 ] 00:11:36.029 } 00:11:36.029 ] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:36.029 
04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:11:36.029 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:36.030 04:28:14 
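[Note] discovery.sh lines 49-50 above are the teardown assertion: after deleting Null1-4 and the referral, bdev_get_bdevs piped through jq must yield nothing, which is why check_bdevs comes back empty and the `-n` test falls through. Expanded, with a failure branch added here only to make the intent explicit (the script's actual error handling is not shown in the trace):

    # Sketch of the final leak check from discovery.sh lines 49-50.
    check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
    if [ -n "$check_bdevs" ]; then
        echo "leftover bdevs after cleanup: $check_bdevs" >&2
        exit 1
    fi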
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:36.030 rmmod nvme_rdma 00:11:36.030 rmmod nvme_fabrics 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 213513 ']' 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 213513 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 213513 ']' 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 213513 00:11:36.030 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:36.290 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.290 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 213513 00:11:36.290 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.290 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.290 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 213513' 00:11:36.290 killing process with pid 213513 00:11:36.290 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 213513 00:11:36.290 04:28:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 213513 00:11:36.562 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:36.562 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:36.563 00:11:36.563 real 0m8.778s 00:11:36.563 user 0m6.460s 00:11:36.563 sys 0m6.055s 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.563 ************************************ 00:11:36.563 END TEST nvmf_target_discovery 
00:11:36.563 ************************************ 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:36.563 ************************************ 00:11:36.563 START TEST nvmf_referrals 00:11:36.563 ************************************ 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:36.563 * Looking for test storage... 00:11:36.563 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:36.563 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:36.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.825 --rc genhtml_branch_coverage=1 00:11:36.825 --rc genhtml_function_coverage=1 00:11:36.825 --rc genhtml_legend=1 00:11:36.825 --rc geninfo_all_blocks=1 00:11:36.825 --rc geninfo_unexecuted_blocks=1 00:11:36.825 00:11:36.825 ' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:36.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.825 --rc genhtml_branch_coverage=1 00:11:36.825 --rc genhtml_function_coverage=1 00:11:36.825 --rc genhtml_legend=1 00:11:36.825 --rc geninfo_all_blocks=1 00:11:36.825 --rc geninfo_unexecuted_blocks=1 00:11:36.825 00:11:36.825 ' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:36.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.825 --rc genhtml_branch_coverage=1 00:11:36.825 --rc genhtml_function_coverage=1 00:11:36.825 --rc genhtml_legend=1 00:11:36.825 --rc geninfo_all_blocks=1 00:11:36.825 --rc geninfo_unexecuted_blocks=1 00:11:36.825 00:11:36.825 ' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:36.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.825 --rc genhtml_branch_coverage=1 00:11:36.825 --rc genhtml_function_coverage=1 00:11:36.825 --rc genhtml_legend=1 00:11:36.825 --rc geninfo_all_blocks=1 00:11:36.825 --rc geninfo_unexecuted_blocks=1 00:11:36.825 00:11:36.825 ' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.825 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.825 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.826 04:28:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:44.966 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:44.967 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:44.967 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:44.967 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:44.967 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:44.967 04:28:22 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:44.967 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:44.967 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:44.967 altname enp217s0f0np0 00:11:44.967 altname ens818f0np0 00:11:44.967 inet 192.168.100.8/24 scope global mlx_0_0 00:11:44.967 valid_lft forever preferred_lft forever 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:44.967 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:44.967 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:44.967 altname enp217s0f1np1 00:11:44.967 altname ens818f1np1 00:11:44.967 inet 192.168.100.9/24 scope global mlx_0_1 00:11:44.967 valid_lft forever preferred_lft forever 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:44.967 04:28:22 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:44.967 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:44.968 192.168.100.9' 
00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:44.968 192.168.100.9' 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:44.968 192.168.100.9' 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=217077 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 217077 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 217077 ']' 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.968 04:28:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 [2024-11-17 04:28:22.835923] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:11:44.968 [2024-11-17 04:28:22.835989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.968 [2024-11-17 04:28:22.926406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.968 [2024-11-17 04:28:22.949779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.968 [2024-11-17 04:28:22.949817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.968 [2024-11-17 04:28:22.949827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.968 [2024-11-17 04:28:22.949835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.968 [2024-11-17 04:28:22.949842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.968 [2024-11-17 04:28:22.951411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.968 [2024-11-17 04:28:22.951523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.968 [2024-11-17 04:28:22.951632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.968 [2024-11-17 04:28:22.951634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 [2024-11-17 04:28:23.123906] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21cac50/0x21cf100) succeed. 00:11:44.968 [2024-11-17 04:28:23.132984] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21cc290/0x22107a0) succeed. 
00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 [2024-11-17 04:28:23.276028] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.968 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.969 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:45.229 04:28:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:45.488 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:45.489 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:45.749 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
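For readers following the trace: get_referral_ips compares the target's own referral list with what a host-side discovery reports. A minimal sketch of the helper, reconstructed from the xtrace above rather than quoted from test/nvmf/target/referrals.sh (the discovery address, port, and NVME_HOST arguments are the ones visible in the log):

    get_referral_ips() {
        if [[ $1 == rpc ]]; then
            # Control-plane view: ask the target over JSON-RPC.
            rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
        elif [[ $1 == nvme ]]; then
            # Host view: run a discovery against 192.168.100.8:8009 and keep
            # every record except the current discovery subsystem itself.
            nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 8009 -o json |
                jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
                sort
        fi
    }

The test passes when both views agree, which is what the repeated [[ ... == ... ]] comparisons in the trace are checking.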
00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:46.009 rmmod nvme_rdma 00:11:46.009 rmmod nvme_fabrics 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 217077 ']' 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 217077 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 217077 ']' 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 217077 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217077 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217077' 00:11:46.009 killing process with pid 217077 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 217077 00:11:46.009 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 217077 00:11:46.269 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:46.269 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:46.269 00:11:46.269 real 0m9.760s 00:11:46.269 user 0m10.878s 00:11:46.269 sys 0m6.502s 00:11:46.269 04:28:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.269 04:28:24 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.269 ************************************ 00:11:46.269 END TEST nvmf_referrals 00:11:46.269 ************************************ 00:11:46.269 04:28:25 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:46.269 04:28:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.269 04:28:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.269 04:28:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.269 ************************************ 00:11:46.269 START TEST nvmf_connect_disconnect 00:11:46.270 ************************************ 00:11:46.270 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:46.530 * Looking for test storage... 00:11:46.530 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.530 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.531 --rc genhtml_branch_coverage=1 00:11:46.531 --rc genhtml_function_coverage=1 00:11:46.531 --rc genhtml_legend=1 00:11:46.531 --rc geninfo_all_blocks=1 00:11:46.531 --rc geninfo_unexecuted_blocks=1 00:11:46.531 00:11:46.531 ' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.531 --rc genhtml_branch_coverage=1 00:11:46.531 --rc genhtml_function_coverage=1 00:11:46.531 --rc genhtml_legend=1 00:11:46.531 --rc geninfo_all_blocks=1 00:11:46.531 --rc geninfo_unexecuted_blocks=1 00:11:46.531 00:11:46.531 ' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.531 --rc genhtml_branch_coverage=1 00:11:46.531 --rc genhtml_function_coverage=1 00:11:46.531 --rc genhtml_legend=1 00:11:46.531 --rc geninfo_all_blocks=1 00:11:46.531 --rc geninfo_unexecuted_blocks=1 00:11:46.531 00:11:46.531 ' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.531 --rc genhtml_branch_coverage=1 00:11:46.531 --rc genhtml_function_coverage=1 00:11:46.531 --rc genhtml_legend=1 00:11:46.531 --rc geninfo_all_blocks=1 00:11:46.531 --rc geninfo_unexecuted_blocks=1 00:11:46.531 00:11:46.531 ' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.531 04:28:25 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.531 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.531 04:28:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:11:54.668 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:54.668 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:54.668 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
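The two "Found 0000:d9:00.x (0x15b3 - 0x1015)" entries come from the PCI scan in test/nvmf/common.sh (gather_supported_nvmf_pci_devs). A stand-alone sketch of that scan, assuming the standard sysfs layout; the real helper also matches the Intel e810/x722 IDs seen earlier in the trace and checks driver binding:

    # Vendor 0x15b3 is Mellanox; device 0x1015 is the ConnectX-4 Lx part
    # behind both ports found above.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        if [[ $vendor == 0x15b3 && $device == 0x1015 ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net" 2>/dev/null   # kernel netdevs; here mlx_0_0 and mlx_0_1
        fi
    done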
00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:54.668 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:54.668 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:54.669 04:28:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:54.669 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:54.669 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:54.669 altname enp217s0f0np0 00:11:54.669 altname ens818f0np0 00:11:54.669 inet 192.168.100.8/24 scope global mlx_0_0 00:11:54.669 valid_lft forever preferred_lft forever 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:54.669 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:54.669 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:54.669 altname enp217s0f1np1 00:11:54.669 altname ens818f1np1 00:11:54.669 inet 192.168.100.9/24 scope global mlx_0_1 00:11:54.669 valid_lft forever preferred_lft forever 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:54.669 04:28:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:54.669 192.168.100.9' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:54.669 192.168.100.9' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:54.669 192.168.100.9' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:54.669 04:28:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.669 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=220998 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 220998 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 220998 ']' 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.670 [2024-11-17 04:28:32.621270] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:54.670 [2024-11-17 04:28:32.621321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.670 [2024-11-17 04:28:32.711739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.670 [2024-11-17 04:28:32.734227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.670 [2024-11-17 04:28:32.734266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.670 [2024-11-17 04:28:32.734275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.670 [2024-11-17 04:28:32.734283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.670 [2024-11-17 04:28:32.734307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
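At this point nvmfappstart has launched the target (pid 220998) and waitforlisten blocks until its RPC socket answers. A minimal equivalent of that sequence, assuming $rootdir points at the SPDK checkout and the default /var/tmp/spdk.sock socket; the verbatim helpers live in test/common/autotest_common.sh and do more bookkeeping than this:

    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the app has created its RPC socket and answers a trivial call.
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
        sleep 0.5
    done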
00:11:54.670 [2024-11-17 04:28:32.736022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.670 [2024-11-17 04:28:32.736132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.670 [2024-11-17 04:28:32.736243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.670 [2024-11-17 04:28:32.736244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.670 04:28:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.670 [2024-11-17 04:28:32.879917] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:54.670 [2024-11-17 04:28:32.901452] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b17c50/0x1b1c100) succeed. 00:11:54.670 [2024-11-17 04:28:32.910736] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b19290/0x1b5d7a0) succeed. 
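With the RDMA transport created and both mlx5 IB devices registered, the entries that follow provision the test subsystem. The same five steps as plain scripts/rpc.py invocations (rpc_cmd forwards its arguments to rpc.py, so these map one-to-one onto the trace):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512        # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420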
00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.670 [2024-11-17 04:28:33.060667] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:54.670 04:28:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:57.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.698 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:12:16.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.650 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:09.872 04:33:48 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:09.872 rmmod nvme_rdma 00:17:09.872 rmmod nvme_fabrics 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 220998 ']' 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 220998 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 220998 ']' 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 220998 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220998 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220998' 00:17:09.872 killing process with pid 220998 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 220998 00:17:09.872 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 220998 00:17:10.132 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:10.132 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:10.132 00:17:10.132 real 5m23.811s 00:17:10.132 user 21m2.180s 00:17:10.132 sys 0m18.222s 00:17:10.132 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.132 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:10.132 
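The connect/disconnect body that just finished is easy to lose in the trace: the rpc_cmd calls above export a 64 MiB malloc bdev over NVMe/RDMA on 192.168.100.8:4420, and each of the hundred "disconnected 1 controller(s)" messages is one loop iteration. A minimal sketch of the same flow, assuming a running nvmf_tgt and SPDK's rpc.py on PATH (the loop count and the RDMA-specific 'nvme connect -i 8' match this run):

    #!/usr/bin/env bash
    # Sketch only: mirrors the rpc_cmd/nvme calls traced above.
    set -e
    NQN=nqn.2016-06.io.spdk:cnode1
    ADDR=192.168.100.8
    PORT=4420

    # Target-side setup, as in connect_disconnect.sh lines 20-24:
    rpc.py bdev_malloc_create 64 512                               # -> "Malloc0"
    rpc.py nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns "$NQN" Malloc0
    rpc.py nvmf_subsystem_add_listener "$NQN" -t rdma -a "$ADDR" -s "$PORT"

    # Host-side loop: num_iterations=100 for the RDMA phy run.
    for i in $(seq 1 100); do
        nvme connect -i 8 -t rdma -n "$NQN" -a "$ADDR" -s "$PORT"
        nvme disconnect -n "$NQN"   # prints "NQN:... disconnected 1 controller(s)"
    done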
************************************ 00:17:10.132 END TEST nvmf_connect_disconnect 00:17:10.132 ************************************ 00:17:10.132 04:33:48 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:17:10.132 04:33:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.132 04:33:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.132 04:33:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.393 ************************************ 00:17:10.393 START TEST nvmf_multitarget 00:17:10.393 ************************************ 00:17:10.393 04:33:48 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:17:10.393 * Looking for test storage... 00:17:10.393 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.393 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.394 --rc genhtml_branch_coverage=1 00:17:10.394 --rc genhtml_function_coverage=1 00:17:10.394 --rc genhtml_legend=1 00:17:10.394 --rc geninfo_all_blocks=1 00:17:10.394 --rc geninfo_unexecuted_blocks=1 00:17:10.394 00:17:10.394 ' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.394 --rc genhtml_branch_coverage=1 00:17:10.394 --rc genhtml_function_coverage=1 00:17:10.394 --rc genhtml_legend=1 00:17:10.394 --rc geninfo_all_blocks=1 00:17:10.394 --rc geninfo_unexecuted_blocks=1 00:17:10.394 00:17:10.394 ' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.394 --rc genhtml_branch_coverage=1 00:17:10.394 --rc genhtml_function_coverage=1 00:17:10.394 --rc genhtml_legend=1 00:17:10.394 --rc geninfo_all_blocks=1 00:17:10.394 --rc geninfo_unexecuted_blocks=1 00:17:10.394 00:17:10.394 ' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.394 --rc genhtml_branch_coverage=1 00:17:10.394 --rc genhtml_function_coverage=1 00:17:10.394 --rc genhtml_legend=1 00:17:10.394 --rc geninfo_all_blocks=1 00:17:10.394 --rc geninfo_unexecuted_blocks=1 00:17:10.394 00:17:10.394 ' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.394 04:33:49 
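The xtrace block above (it reappears verbatim when nvmf_rpc starts later) is scripts/common.sh deciding whether the installed lcov predates version 2: each version string is split on '.', '-' and ':' into an array and compared component-wise, with missing components treated as 0. A condensed re-implementation of that 'lt' check, assuming purely numeric components:

    lt() {
        local -a v1 v2
        local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        # Walk up to the longer array, exactly like the ternary traced above.
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower component wins
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"   # the branch this run takes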
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.394 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:10.394 04:33:49 
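The "line 33: [: : integer expression expected" message above is a shell error from nvmf/common.sh itself, not test output: the trace shows '[' '' -eq 1 ']', i.e. an empty variable fed to a numeric test, which '[' rejects, so the branch is silently skipped and the suite carries on. The variable name below is illustrative, not the one common.sh actually tests; a minimal reproduction plus the usual defensive fix:

    flag=""                       # unset/empty in this environment
    if [ "$flag" -eq 1 ]; then    # -> "[: : integer expression expected"
        echo "branch never taken"
    fi

    if [ "${flag:-0}" -eq 1 ]; then   # default empty to 0: the numeric test is now safe
        echo "flag set"
    fi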
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.394 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.395 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.395 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.395 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.395 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.395 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.395 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.395 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.395 04:33:49 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:18.529 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:18.529 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:18.529 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:18.529 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:18.530 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:18.530 04:33:56 
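The device walk above classifies PCI NICs by vendor:device pairs (the e810/x722/mlx arrays) and then resolves each hit to its network interfaces via /sys/bus/pci/devices/$pci/net/*, which is where "Found net devices under 0000:d9:00.0: mlx_0_0" comes from. A hedged sketch of the same walk done directly against sysfs, with the Mellanox ID list trimmed to the parts named in the trace:

    mellanox=0x15b3
    mlx_ids=(0x1013 0x1015 0x1017 0x1019 0x101b 0x101d 0x1021 0xa2d6 0xa2dc)

    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$mellanox" ]] || continue
        device=$(<"$pci/device")
        for id in "${mlx_ids[@]}"; do
            [[ $device == "$id" ]] || continue
            echo "Found ${pci##*/} ($mellanox - $device)"
            # Net devices are enumerated exactly as in the trace: $pci/net/*
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
            done
        done
    done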
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:18.530 04:33:56 
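load_ib_rdma_modules, traced above, is the IB/RDMA core stack being probed in a fixed order after a Linux check; rxe_cfg and the RDMA listeners need these modules present. A condensed sketch (the real helper is simply a no-op off Linux):

    load_ib_rdma_modules() {
        [ "$(uname)" = Linux ] || return 0            # no-op elsewhere
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod" || return 1               # same probe order as the trace
        done
    }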
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:18.530 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:18.530 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:18.530 altname enp217s0f0np0 00:17:18.530 altname ens818f0np0 00:17:18.530 inet 192.168.100.8/24 scope global mlx_0_0 00:17:18.530 valid_lft forever preferred_lft forever 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:18.530 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:18.530 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:18.530 altname enp217s0f1np1 00:17:18.530 altname ens818f1np1 00:17:18.530 inet 192.168.100.9/24 scope global mlx_0_1 00:17:18.530 valid_lft forever preferred_lft forever 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:18.530 04:33:56 
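allocate_nic_ips walks the RDMA-capable interfaces and, as shown above for mlx_0_0 and mlx_0_1, reads each one's IPv4 address back with an ip/awk/cut pipeline. A self-contained sketch of that helper:

    get_ip_address() {
        local interface=$1
        # 4th field of 'ip -o -4 addr show' is "addr/prefix"; strip the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ip=$(get_ip_address mlx_0_0)    # -> 192.168.100.8 on this rig
    [ -n "$ip" ] || echo "mlx_0_0 has no IPv4 address" >&2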
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:18.530 192.168.100.9' 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:18.530 192.168.100.9' 00:17:18.530 04:33:56 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:18.530 192.168.100.9' 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:17:18.530 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=280091 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 280091 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 280091 ']' 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:18.531 [2024-11-17 04:33:56.510088] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
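Two steps complete the test init here: the newline-separated RDMA_IP_LIST is split into first/second target IPs with head/tail, and nvmf_tgt is launched (-m 0xF, hence the four reactor cores reported below) with waitforlisten polling its RPC socket. A sketch of both, assuming rpc.py is on PATH:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done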
00:17:18.531 [2024-11-17 04:33:56.510141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.531 [2024-11-17 04:33:56.600649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.531 [2024-11-17 04:33:56.623086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.531 [2024-11-17 04:33:56.623126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.531 [2024-11-17 04:33:56.623136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.531 [2024-11-17 04:33:56.623144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.531 [2024-11-17 04:33:56.623151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.531 [2024-11-17 04:33:56.624718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.531 [2024-11-17 04:33:56.624837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.531 [2024-11-17 04:33:56.624963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.531 [2024-11-17 04:33:56.624962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:18.531 "nvmf_tgt_1" 00:17:18.531 04:33:56 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:18.531 "nvmf_tgt_2" 00:17:18.531 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:18.531 
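The multitarget body driven by multitarget_rpc.py, whose jq-length assertions follow, reduces to: expect the one default target, create nvmf_tgt_1 and nvmf_tgt_2, expect three, delete both, expect one again. A sketch with the same RPCs and checks (script path as used in this run):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    expect_targets() {
        local want=$1 got
        got=$("$rpc_py" nvmf_get_targets | jq length)
        if [ "$got" != "$want" ]; then
            echo "expected $want targets, got $got" >&2
            exit 1
        fi
    }

    expect_targets 1
    "$rpc_py" nvmf_create_target -n nvmf_tgt_1 -s 32   # prints "nvmf_tgt_1"
    "$rpc_py" nvmf_create_target -n nvmf_tgt_2 -s 32
    expect_targets 3
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_1         # prints: true
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_2
    expect_targets 1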
04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:18.531 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:18.531 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:18.531 true 00:17:18.531 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:18.791 true 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:18.791 rmmod nvme_rdma 00:17:18.791 rmmod nvme_fabrics 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 280091 ']' 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 280091 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 280091 ']' 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 280091 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:18.791 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.792 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280091 00:17:18.792 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.792 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.792 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280091' 00:17:18.792 killing process with pid 280091 00:17:18.792 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 280091 00:17:18.792 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 280091 00:17:19.052 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.052 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:19.052 00:17:19.052 real 0m8.774s 00:17:19.052 user 0m7.357s 00:17:19.052 sys 0m5.996s 00:17:19.052 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.052 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:19.052 ************************************ 00:17:19.052 END TEST nvmf_multitarget 00:17:19.052 ************************************ 00:17:19.052 04:33:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:19.052 04:33:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.052 04:33:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.052 04:33:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.052 ************************************ 00:17:19.052 START TEST nvmf_rpc 00:17:19.052 ************************************ 00:17:19.052 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:19.313 * Looking for test storage... 
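killprocess, traced twice now (pid 220998 for connect_disconnect, 280091 here), guards before signalling: the pid must be non-empty and alive, and its comm is inspected ("reactor_0" for nvmf_tgt) so a sudo wrapper can be special-cased. A Linux-only sketch; the real helper also has a FreeBSD branch and handles the sudo case differently:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1                # must still be running
        process_name=$(ps --no-headers -o comm= "$pid")       # "reactor_0" for nvmf_tgt
        [ "$process_name" = sudo ] && return 1                # sudo special case, skipped here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                       # reap it if it is our child
    }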
00:17:19.313 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:19.313 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.313 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.313 04:33:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:19.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.313 --rc genhtml_branch_coverage=1 00:17:19.313 --rc genhtml_function_coverage=1 00:17:19.313 --rc genhtml_legend=1 00:17:19.313 --rc geninfo_all_blocks=1 00:17:19.313 --rc geninfo_unexecuted_blocks=1 00:17:19.313 00:17:19.313 ' 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:19.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.313 --rc genhtml_branch_coverage=1 00:17:19.313 --rc genhtml_function_coverage=1 00:17:19.313 --rc genhtml_legend=1 00:17:19.313 --rc geninfo_all_blocks=1 00:17:19.313 --rc geninfo_unexecuted_blocks=1 00:17:19.313 00:17:19.313 ' 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:19.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.313 --rc genhtml_branch_coverage=1 00:17:19.313 --rc genhtml_function_coverage=1 00:17:19.313 --rc genhtml_legend=1 00:17:19.313 --rc geninfo_all_blocks=1 00:17:19.313 --rc geninfo_unexecuted_blocks=1 00:17:19.313 00:17:19.313 ' 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:19.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.313 --rc genhtml_branch_coverage=1 00:17:19.313 --rc genhtml_function_coverage=1 00:17:19.313 --rc genhtml_legend=1 00:17:19.313 --rc geninfo_all_blocks=1 00:17:19.313 --rc geninfo_unexecuted_blocks=1 00:17:19.313 00:17:19.313 ' 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.313 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.314 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:19.314 04:33:58 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:19.314 04:33:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.450 04:34:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.450 04:34:05 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:27.450 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:27.450 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:27.450 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:27.450 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:27.450 04:34:05 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:27.450 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:27.451 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:27.451 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:27.451 altname enp217s0f0np0 00:17:27.451 altname ens818f0np0 00:17:27.451 inet 192.168.100.8/24 scope global mlx_0_0 00:17:27.451 valid_lft forever preferred_lft forever 00:17:27.451 
04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:27.451 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:27.451 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:27.451 altname enp217s0f1np1 00:17:27.451 altname ens818f1np1 00:17:27.451 inet 192.168.100.9/24 scope global mlx_0_1 00:17:27.451 valid_lft forever preferred_lft forever 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
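The get_ip_address/allocate_nic_ips steps traced above reduce to a single ip/awk/cut pipeline per interface. A minimal standalone sketch of that pattern, assuming the same mlx_0_0/mlx_0_1 interface names seen in the trace (illustrative, not the vendored nvmf/common.sh):

#!/usr/bin/env bash
# Minimal sketch of the lookup traced above:
#   ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1
get_ip_address() {
    local interface=$1
    # -o prints one record per line; the 4th field is "ADDR/PREFIX",
    # so cut strips the prefix length and leaves the bare IPv4 address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do
    ip=$(get_ip_address "$nic")
    echo "$nic -> ${ip:-<no IPv4 address assigned>}"
done

In this run the two lookups return 192.168.100.8 and 192.168.100.9, consistent with the NVMF_IP_PREFIX=192.168.100 and NVMF_IP_LEAST_ADDR=8 settings sourced earlier from nvmf/common.sh.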
00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:27.451 192.168.100.9' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:27.451 192.168.100.9' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:27.451 192.168.100.9' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
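The head/tail juggling above is how the script turns the newline-separated RDMA_IP_LIST into the first and second target IPs. A compact sketch with the values from this run (variable names mirror the traced common.sh; the snippet itself is illustrative):

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
# First line of the list becomes the first target, the second line the second.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"
# first=192.168.100.8 second=192.168.100.9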
00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=283639 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 283639 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 283639 ']' 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.451 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.451 [2024-11-17 04:34:05.331275] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:27.451 [2024-11-17 04:34:05.331330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.451 [2024-11-17 04:34:05.424193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.451 [2024-11-17 04:34:05.446791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.452 [2024-11-17 04:34:05.446830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.452 [2024-11-17 04:34:05.446843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.452 [2024-11-17 04:34:05.446851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.452 [2024-11-17 04:34:05.446857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
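nvmfappstart launches the target and waitforlisten blocks until the RPC socket answers. A reasonable sketch of that pattern, reusing the binary and flags from the trace with an illustrative polling loop (the real helper lives in common/autotest_common.sh and is more involved):

app=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Same invocation as the trace: shm id 0, tracepoint mask 0xFFFF, core mask 0xF.
"$app" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll until the target listens on /var/tmp/spdk.sock; rpc_get_methods is a
# cheap RPC that succeeds as soon as the server is accepting connections.
for ((i = 0; i < 100; i++)); do
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is ready"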
00:17:27.452 [2024-11-17 04:34:05.448531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.452 [2024-11-17 04:34:05.448629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.452 [2024-11-17 04:34:05.448740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.452 [2024-11-17 04:34:05.448741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:27.452 "tick_rate": 2500000000, 00:17:27.452 "poll_groups": [ 00:17:27.452 { 00:17:27.452 "name": "nvmf_tgt_poll_group_000", 00:17:27.452 "admin_qpairs": 0, 00:17:27.452 "io_qpairs": 0, 00:17:27.452 "current_admin_qpairs": 0, 00:17:27.452 "current_io_qpairs": 0, 00:17:27.452 "pending_bdev_io": 0, 00:17:27.452 "completed_nvme_io": 0, 00:17:27.452 "transports": [] 00:17:27.452 }, 00:17:27.452 { 00:17:27.452 "name": "nvmf_tgt_poll_group_001", 00:17:27.452 "admin_qpairs": 0, 00:17:27.452 "io_qpairs": 0, 00:17:27.452 "current_admin_qpairs": 0, 00:17:27.452 "current_io_qpairs": 0, 00:17:27.452 "pending_bdev_io": 0, 00:17:27.452 "completed_nvme_io": 0, 00:17:27.452 "transports": [] 00:17:27.452 }, 00:17:27.452 { 00:17:27.452 "name": "nvmf_tgt_poll_group_002", 00:17:27.452 "admin_qpairs": 0, 00:17:27.452 "io_qpairs": 0, 00:17:27.452 "current_admin_qpairs": 0, 00:17:27.452 "current_io_qpairs": 0, 00:17:27.452 "pending_bdev_io": 0, 00:17:27.452 "completed_nvme_io": 0, 00:17:27.452 "transports": [] 00:17:27.452 }, 00:17:27.452 { 00:17:27.452 "name": "nvmf_tgt_poll_group_003", 00:17:27.452 "admin_qpairs": 0, 00:17:27.452 "io_qpairs": 0, 00:17:27.452 "current_admin_qpairs": 0, 00:17:27.452 "current_io_qpairs": 0, 00:17:27.452 "pending_bdev_io": 0, 00:17:27.452 "completed_nvme_io": 0, 00:17:27.452 "transports": [] 00:17:27.452 } 00:17:27.452 ] 00:17:27.452 }' 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 [2024-11-17 04:34:05.727034] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b8ccb0/0x1b91160) succeed. 00:17:27.452 [2024-11-17 04:34:05.736307] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b8e2f0/0x1bd2800) succeed. 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.452 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:27.452 "tick_rate": 2500000000, 00:17:27.452 "poll_groups": [ 00:17:27.452 { 00:17:27.452 "name": "nvmf_tgt_poll_group_000", 00:17:27.452 "admin_qpairs": 0, 00:17:27.452 "io_qpairs": 0, 00:17:27.452 "current_admin_qpairs": 0, 00:17:27.452 "current_io_qpairs": 0, 00:17:27.452 "pending_bdev_io": 0, 00:17:27.452 "completed_nvme_io": 0, 00:17:27.452 "transports": [ 00:17:27.452 { 00:17:27.452 "trtype": "RDMA", 00:17:27.452 "pending_data_buffer": 0, 00:17:27.452 "devices": [ 00:17:27.452 { 00:17:27.452 "name": "mlx5_0", 00:17:27.452 "polls": 15825, 00:17:27.452 "idle_polls": 15825, 00:17:27.452 "completions": 0, 00:17:27.452 "requests": 0, 00:17:27.452 "request_latency": 0, 00:17:27.452 "pending_free_request": 0, 00:17:27.452 "pending_rdma_read": 0, 00:17:27.452 "pending_rdma_write": 0, 00:17:27.452 "pending_rdma_send": 0, 00:17:27.452 "total_send_wrs": 0, 00:17:27.452 "send_doorbell_updates": 0, 00:17:27.452 "total_recv_wrs": 4096, 00:17:27.452 "recv_doorbell_updates": 1 00:17:27.452 }, 00:17:27.452 { 00:17:27.452 "name": "mlx5_1", 00:17:27.452 "polls": 15825, 00:17:27.452 "idle_polls": 15825, 00:17:27.452 "completions": 0, 00:17:27.452 "requests": 0, 00:17:27.452 "request_latency": 0, 00:17:27.452 "pending_free_request": 0, 00:17:27.452 "pending_rdma_read": 0, 00:17:27.452 "pending_rdma_write": 0, 00:17:27.452 "pending_rdma_send": 0, 00:17:27.452 "total_send_wrs": 0, 00:17:27.452 "send_doorbell_updates": 0, 00:17:27.452 "total_recv_wrs": 4096, 00:17:27.452 "recv_doorbell_updates": 1 00:17:27.452 } 00:17:27.452 ] 00:17:27.452 } 00:17:27.452 ] 00:17:27.452 }, 00:17:27.452 { 00:17:27.452 "name": "nvmf_tgt_poll_group_001", 00:17:27.452 "admin_qpairs": 0, 00:17:27.452 "io_qpairs": 0, 00:17:27.452 "current_admin_qpairs": 0, 00:17:27.452 "current_io_qpairs": 0, 00:17:27.452 "pending_bdev_io": 0, 00:17:27.452 "completed_nvme_io": 0, 00:17:27.452 "transports": [ 00:17:27.452 { 00:17:27.452 "trtype": "RDMA", 00:17:27.452 "pending_data_buffer": 0, 00:17:27.452 "devices": [ 00:17:27.452 { 00:17:27.452 "name": "mlx5_0", 
00:17:27.452 "polls": 9817, 00:17:27.452 "idle_polls": 9817, 00:17:27.452 "completions": 0, 00:17:27.452 "requests": 0, 00:17:27.452 "request_latency": 0, 00:17:27.452 "pending_free_request": 0, 00:17:27.452 "pending_rdma_read": 0, 00:17:27.452 "pending_rdma_write": 0, 00:17:27.452 "pending_rdma_send": 0, 00:17:27.452 "total_send_wrs": 0, 00:17:27.452 "send_doorbell_updates": 0, 00:17:27.452 "total_recv_wrs": 4096, 00:17:27.452 "recv_doorbell_updates": 1 00:17:27.452 }, 00:17:27.452 { 00:17:27.452 "name": "mlx5_1", 00:17:27.452 "polls": 9817, 00:17:27.452 "idle_polls": 9817, 00:17:27.452 "completions": 0, 00:17:27.452 "requests": 0, 00:17:27.452 "request_latency": 0, 00:17:27.452 "pending_free_request": 0, 00:17:27.452 "pending_rdma_read": 0, 00:17:27.452 "pending_rdma_write": 0, 00:17:27.452 "pending_rdma_send": 0, 00:17:27.452 "total_send_wrs": 0, 00:17:27.452 "send_doorbell_updates": 0, 00:17:27.452 "total_recv_wrs": 4096, 00:17:27.452 "recv_doorbell_updates": 1 00:17:27.452 } 00:17:27.452 ] 00:17:27.452 } 00:17:27.452 ] 00:17:27.452 }, 00:17:27.452 { 00:17:27.452 "name": "nvmf_tgt_poll_group_002", 00:17:27.452 "admin_qpairs": 0, 00:17:27.452 "io_qpairs": 0, 00:17:27.452 "current_admin_qpairs": 0, 00:17:27.452 "current_io_qpairs": 0, 00:17:27.452 "pending_bdev_io": 0, 00:17:27.452 "completed_nvme_io": 0, 00:17:27.452 "transports": [ 00:17:27.452 { 00:17:27.452 "trtype": "RDMA", 00:17:27.452 "pending_data_buffer": 0, 00:17:27.452 "devices": [ 00:17:27.452 { 00:17:27.452 "name": "mlx5_0", 00:17:27.452 "polls": 5629, 00:17:27.452 "idle_polls": 5629, 00:17:27.452 "completions": 0, 00:17:27.452 "requests": 0, 00:17:27.452 "request_latency": 0, 00:17:27.452 "pending_free_request": 0, 00:17:27.452 "pending_rdma_read": 0, 00:17:27.452 "pending_rdma_write": 0, 00:17:27.452 "pending_rdma_send": 0, 00:17:27.452 "total_send_wrs": 0, 00:17:27.452 "send_doorbell_updates": 0, 00:17:27.452 "total_recv_wrs": 4096, 00:17:27.452 "recv_doorbell_updates": 1 00:17:27.452 }, 00:17:27.452 { 00:17:27.452 "name": "mlx5_1", 00:17:27.452 "polls": 5629, 00:17:27.452 "idle_polls": 5629, 00:17:27.452 "completions": 0, 00:17:27.452 "requests": 0, 00:17:27.452 "request_latency": 0, 00:17:27.452 "pending_free_request": 0, 00:17:27.452 "pending_rdma_read": 0, 00:17:27.452 "pending_rdma_write": 0, 00:17:27.452 "pending_rdma_send": 0, 00:17:27.452 "total_send_wrs": 0, 00:17:27.452 "send_doorbell_updates": 0, 00:17:27.452 "total_recv_wrs": 4096, 00:17:27.453 "recv_doorbell_updates": 1 00:17:27.453 } 00:17:27.453 ] 00:17:27.453 } 00:17:27.453 ] 00:17:27.453 }, 00:17:27.453 { 00:17:27.453 "name": "nvmf_tgt_poll_group_003", 00:17:27.453 "admin_qpairs": 0, 00:17:27.453 "io_qpairs": 0, 00:17:27.453 "current_admin_qpairs": 0, 00:17:27.453 "current_io_qpairs": 0, 00:17:27.453 "pending_bdev_io": 0, 00:17:27.453 "completed_nvme_io": 0, 00:17:27.453 "transports": [ 00:17:27.453 { 00:17:27.453 "trtype": "RDMA", 00:17:27.453 "pending_data_buffer": 0, 00:17:27.453 "devices": [ 00:17:27.453 { 00:17:27.453 "name": "mlx5_0", 00:17:27.453 "polls": 915, 00:17:27.453 "idle_polls": 915, 00:17:27.453 "completions": 0, 00:17:27.453 "requests": 0, 00:17:27.453 "request_latency": 0, 00:17:27.453 "pending_free_request": 0, 00:17:27.453 "pending_rdma_read": 0, 00:17:27.453 "pending_rdma_write": 0, 00:17:27.453 "pending_rdma_send": 0, 00:17:27.453 "total_send_wrs": 0, 00:17:27.453 "send_doorbell_updates": 0, 00:17:27.453 "total_recv_wrs": 4096, 00:17:27.453 "recv_doorbell_updates": 1 00:17:27.453 }, 00:17:27.453 { 00:17:27.453 "name": "mlx5_1", 
00:17:27.453 "polls": 915, 00:17:27.453 "idle_polls": 915, 00:17:27.453 "completions": 0, 00:17:27.453 "requests": 0, 00:17:27.453 "request_latency": 0, 00:17:27.453 "pending_free_request": 0, 00:17:27.453 "pending_rdma_read": 0, 00:17:27.453 "pending_rdma_write": 0, 00:17:27.453 "pending_rdma_send": 0, 00:17:27.453 "total_send_wrs": 0, 00:17:27.453 "send_doorbell_updates": 0, 00:17:27.453 "total_recv_wrs": 4096, 00:17:27.453 "recv_doorbell_updates": 1 00:17:27.453 } 00:17:27.453 ] 00:17:27.453 } 00:17:27.453 ] 00:17:27.453 } 00:17:27.453 ] 00:17:27.453 }' 00:17:27.453 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:27.453 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:27.453 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:27.453 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:27.453 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:27.453 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:27.453 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:27.453 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:27.453 04:34:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:27.453 04:34:06 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 Malloc1 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 [2024-11-17 04:34:06.197076] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:27.453 04:34:06 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:27.453 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:17:27.453 [2024-11-17 04:34:06.243224] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:17:27.714 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:27.714 could not add new controller: failed to write to nvme-fabrics device 00:17:27.714 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:27.714 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:27.714 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:27.714 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:27.714 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:27.714 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.714 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.714 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.714 04:34:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:28.655 04:34:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:28.655 04:34:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:28.655 04:34:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.655 04:34:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:28.655 04:34:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:30.568 04:34:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:30.568 04:34:09 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:30.568 04:34:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.568 04:34:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:30.568 04:34:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.568 04:34:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:30.568 04:34:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.507 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.507 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:31.507 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:31.507 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.507 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:31.507 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.507 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:31.507 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:31.507 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:31.508 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:31.508 [2024-11-17 04:34:10.325251] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:17:31.768 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:31.768 could not add new controller: failed to write to nvme-fabrics device 00:17:31.768 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:31.768 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.768 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.768 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.768 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:31.768 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.768 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.768 04:34:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:32.708 04:34:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:32.708 04:34:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:32.708 04:34:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:32.708 04:34:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:32.708 04:34:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:34.617 04:34:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:34.617 04:34:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:34.617 04:34:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:34.617 04:34:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:34.617 04:34:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.617 04:34:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:34.617 04:34:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.557 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.557 [2024-11-17 04:34:14.386069] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:35.817 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.817 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:35.817 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.817 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.817 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.817 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:35.817 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.817 04:34:14 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.817 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.817 04:34:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:36.756 04:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:36.756 04:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:36.756 04:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:36.756 04:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:36.756 04:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:38.664 04:34:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:38.664 04:34:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:38.664 04:34:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:38.664 04:34:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:38.664 04:34:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.664 04:34:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:38.665 04:34:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:39.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.603 [2024-11-17 04:34:18.414335] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.603 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.862 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.862 04:34:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:40.823 04:34:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:40.823 04:34:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:40.823 04:34:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.823 04:34:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:40.823 04:34:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:42.731 04:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:42.731 04:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:42.731 
04:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.731 04:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:42.731 04:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.731 04:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:42.731 04:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.672 [2024-11-17 04:34:22.435539] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.672 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:43.673 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.673 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.673 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.673 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.673 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.673 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.673 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.673 04:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:45.055 04:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:45.055 04:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:45.055 04:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.055 04:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:45.055 04:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:46.963 04:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:46.963 04:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:46.963 04:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:46.963 04:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:46.963 04:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.963 04:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:46.963 04:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:47.903 04:34:26 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:47.903 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.904 [2024-11-17 04:34:26.457229] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.904 04:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:48.843 04:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:48.843 04:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:48.843 04:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.843 04:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:48.843 04:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:50.751 04:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:50.751 04:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:50.751 04:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.751 04:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:50.751 04:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.751 04:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:50.751 04:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:51.690 04:34:30 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.690 [2024-11-17 04:34:30.509654] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.690 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.950 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:51.950 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.950 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.950 04:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:52.889 04:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.889 04:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:52.889 04:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.889 04:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:52.889 04:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.800 04:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.800 04:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.800 04:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.800 04:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:54.800 04:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:17:54.800 04:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:54.800 04:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:55.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.740 [2024-11-17 04:34:34.548298] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.740 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 [2024-11-17 04:34:34.596476] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 [2024-11-17 04:34:34.644680] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 [2024-11-17 04:34:34.692854] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
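The five passes above (target/rpc.sh@81-94 and @99-107) drive the same subsystem lifecycle through the rpc_cmd wrapper: create the subsystem with a fixed serial, attach the RDMA listener, add the Malloc1 namespace, open the subsystem to any host, then remove the namespace and delete the subsystem. The earlier failed connect at rpc.sh@69 shows the other side of the same access control: before nvmf_subsystem_allow_any_host (or an explicit host entry), the target rejects the host NQN with "does not allow host". As a standalone sketch of the @99-107 variant, assuming a running SPDK target and the stock scripts/rpc.py client at an illustrative path:

    rpc=./scripts/rpc.py   # assumed location inside an SPDK checkout
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1     # nsid auto-assigned here
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1    # without this, unlisted hosts are refused
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1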
00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.001 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.002 [2024-11-17 04:34:34.741026] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.002 04:34:34 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.002 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:17:56.002 "tick_rate": 2500000000,
00:17:56.002 "poll_groups": [
00:17:56.002 {
00:17:56.002 "name": "nvmf_tgt_poll_group_000",
00:17:56.002 "admin_qpairs": 2,
00:17:56.002 "io_qpairs": 27,
00:17:56.002 "current_admin_qpairs": 0,
00:17:56.002 "current_io_qpairs": 0,
00:17:56.002 "pending_bdev_io": 0,
00:17:56.002 "completed_nvme_io": 127,
00:17:56.002 "transports": [
00:17:56.002 {
00:17:56.002 "trtype": "RDMA",
00:17:56.002 "pending_data_buffer": 0,
00:17:56.002 "devices": [
00:17:56.002 {
00:17:56.002 "name": "mlx5_0",
00:17:56.002 "polls": 3556574,
00:17:56.002 "idle_polls": 3556252,
00:17:56.002 "completions": 363,
00:17:56.002 "requests": 181,
00:17:56.002 "request_latency": 35938648,
00:17:56.002 "pending_free_request": 0,
00:17:56.002 "pending_rdma_read": 0,
00:17:56.002 "pending_rdma_write": 0,
00:17:56.002 "pending_rdma_send": 0,
00:17:56.002 "total_send_wrs": 307,
00:17:56.002 "send_doorbell_updates": 158,
00:17:56.002 "total_recv_wrs": 4277,
00:17:56.002 "recv_doorbell_updates": 158
00:17:56.002 },
00:17:56.002 {
00:17:56.002 "name": "mlx5_1",
00:17:56.002 "polls": 3556574,
00:17:56.002 "idle_polls": 3556574,
00:17:56.002 "completions": 0,
00:17:56.002 "requests": 0,
00:17:56.002 "request_latency": 0,
00:17:56.002 "pending_free_request": 0,
00:17:56.002 "pending_rdma_read": 0,
00:17:56.002 "pending_rdma_write": 0,
00:17:56.002 "pending_rdma_send": 0,
00:17:56.002 "total_send_wrs": 0,
00:17:56.002 "send_doorbell_updates": 0,
00:17:56.002 "total_recv_wrs": 4096,
00:17:56.002 "recv_doorbell_updates": 1
00:17:56.002 }
00:17:56.002 ]
00:17:56.002 }
00:17:56.002 ]
00:17:56.002 },
00:17:56.002 {
00:17:56.002 "name": "nvmf_tgt_poll_group_001",
00:17:56.002 "admin_qpairs": 2,
00:17:56.002 "io_qpairs": 26,
00:17:56.002 "current_admin_qpairs": 0,
00:17:56.002 "current_io_qpairs": 0,
00:17:56.002 "pending_bdev_io": 0,
00:17:56.002 "completed_nvme_io": 126,
00:17:56.002 "transports": [
00:17:56.002 {
00:17:56.002 "trtype": "RDMA",
00:17:56.002 "pending_data_buffer": 0,
00:17:56.002 "devices": [
00:17:56.002 {
00:17:56.002 "name": "mlx5_0",
00:17:56.002 "polls": 3518040,
00:17:56.002 "idle_polls": 3517723,
00:17:56.002 "completions": 356,
00:17:56.002 "requests": 178,
00:17:56.002 "request_latency": 36166124,
00:17:56.002 "pending_free_request": 0,
00:17:56.002 "pending_rdma_read": 0,
00:17:56.002 "pending_rdma_write": 0,
00:17:56.002 "pending_rdma_send": 0,
00:17:56.002 "total_send_wrs": 302,
00:17:56.002 "send_doorbell_updates": 154,
00:17:56.002 "total_recv_wrs": 4274,
00:17:56.002 "recv_doorbell_updates": 155
00:17:56.002 },
00:17:56.002 {
00:17:56.002 "name": "mlx5_1",
00:17:56.002 "polls": 3518040,
00:17:56.002 "idle_polls": 3518040,
00:17:56.002 "completions": 0,
00:17:56.002 "requests": 0,
00:17:56.002 "request_latency": 0,
00:17:56.002 "pending_free_request": 0,
00:17:56.002 "pending_rdma_read": 0,
00:17:56.002 "pending_rdma_write": 0,
00:17:56.002 "pending_rdma_send": 0,
00:17:56.002 "total_send_wrs": 0,
00:17:56.002 "send_doorbell_updates": 0,
00:17:56.002 "total_recv_wrs": 4096,
00:17:56.002 "recv_doorbell_updates": 1
00:17:56.002 }
00:17:56.002 ]
00:17:56.002 }
00:17:56.002 ]
00:17:56.002 },
00:17:56.002 {
00:17:56.002 "name": "nvmf_tgt_poll_group_002",
00:17:56.002 "admin_qpairs": 1,
00:17:56.002 "io_qpairs": 26,
00:17:56.002 "current_admin_qpairs": 0,
00:17:56.002 "current_io_qpairs": 0,
00:17:56.002 "pending_bdev_io": 0,
00:17:56.002 "completed_nvme_io": 92,
00:17:56.002 "transports": [
00:17:56.002 {
00:17:56.002 "trtype": "RDMA",
00:17:56.002 "pending_data_buffer": 0,
00:17:56.002 "devices": [
00:17:56.002 {
00:17:56.002 "name": "mlx5_0",
00:17:56.002 "polls": 3589345,
00:17:56.002 "idle_polls": 3589142,
00:17:56.002 "completions": 239,
00:17:56.002 "requests": 119,
00:17:56.002 "request_latency": 26209784,
00:17:56.002 "pending_free_request": 0,
00:17:56.002 "pending_rdma_read": 0,
00:17:56.002 "pending_rdma_write": 0,
00:17:56.002 "pending_rdma_send": 0,
00:17:56.002 "total_send_wrs": 198,
00:17:56.002 "send_doorbell_updates": 101,
00:17:56.002 "total_recv_wrs": 4215,
00:17:56.002 "recv_doorbell_updates": 101
00:17:56.002 },
00:17:56.002 {
00:17:56.002 "name": "mlx5_1",
00:17:56.002 "polls": 3589345,
00:17:56.002 "idle_polls": 3589345,
00:17:56.002 "completions": 0,
00:17:56.003 "requests": 0,
00:17:56.003 "request_latency": 0,
00:17:56.003 "pending_free_request": 0,
00:17:56.003 "pending_rdma_read": 0,
00:17:56.003 "pending_rdma_write": 0,
00:17:56.003 "pending_rdma_send": 0,
00:17:56.003 "total_send_wrs": 0,
00:17:56.003 "send_doorbell_updates": 0,
00:17:56.003 "total_recv_wrs": 4096,
00:17:56.003 "recv_doorbell_updates": 1
00:17:56.003 }
00:17:56.003 ]
00:17:56.003 }
00:17:56.003 ]
00:17:56.003 },
00:17:56.003 {
00:17:56.003 "name": "nvmf_tgt_poll_group_003",
00:17:56.003 "admin_qpairs": 2,
00:17:56.003 "io_qpairs": 26,
00:17:56.003 "current_admin_qpairs": 0,
00:17:56.003 "current_io_qpairs": 0,
00:17:56.003 "pending_bdev_io": 0,
00:17:56.003 "completed_nvme_io": 110,
00:17:56.003 "transports": [
00:17:56.003 {
00:17:56.003 "trtype": "RDMA",
00:17:56.003 "pending_data_buffer": 0,
00:17:56.003 "devices": [
00:17:56.003 {
00:17:56.003 "name": "mlx5_0",
00:17:56.003 "polls": 2848475,
00:17:56.003 "idle_polls": 2848175,
00:17:56.003 "completions": 326,
00:17:56.003 "requests": 163,
00:17:56.003 "request_latency": 28523342,
00:17:56.003 "pending_free_request": 0,
00:17:56.003 "pending_rdma_read": 0,
00:17:56.003 "pending_rdma_write": 0,
00:17:56.003 "pending_rdma_send": 0,
00:17:56.003 "total_send_wrs": 272,
00:17:56.003 "send_doorbell_updates": 149,
00:17:56.003 "total_recv_wrs": 4259,
00:17:56.003 "recv_doorbell_updates": 150
00:17:56.003 },
00:17:56.003 {
00:17:56.003 "name": "mlx5_1",
00:17:56.003 "polls": 2848475,
00:17:56.003 "idle_polls": 2848475,
00:17:56.003 "completions": 0,
00:17:56.003 "requests": 0,
00:17:56.003 "request_latency": 0,
00:17:56.003 "pending_free_request": 0,
00:17:56.003 "pending_rdma_read": 0,
00:17:56.003 "pending_rdma_write": 0,
00:17:56.003 "pending_rdma_send": 0,
00:17:56.003 "total_send_wrs": 0,
00:17:56.003 "send_doorbell_updates": 0,
00:17:56.003 "total_recv_wrs": 4096,
00:17:56.003 "recv_doorbell_updates": 1
00:17:56.003 }
00:17:56.003 ]
00:17:56.003 }
00:17:56.003 ]
00:17:56.003 }
00:17:56.003 ]
00:17:56.003 }'
00:17:56.003 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum
'.poll_groups[].admin_qpairs' 00:17:56.003 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1284 > 0 )) 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:17:56.263 04:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:56.263 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 126837898 > 0 )) 00:17:56.263 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:56.263 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:56.263 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.263 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:56.263 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:56.264 rmmod nvme_rdma 00:17:56.264 rmmod nvme_fabrics 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.264 
04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 283639 ']' 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 283639 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 283639 ']' 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 283639 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.264 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283639 00:17:56.523 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.523 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.523 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283639' 00:17:56.523 killing process with pid 283639 00:17:56.523 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 283639 00:17:56.524 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 283639 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:56.783 00:17:56.783 real 0m37.543s 00:17:56.783 user 2m1.613s 00:17:56.783 sys 0m7.279s 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.783 ************************************ 00:17:56.783 END TEST nvmf_rpc 00:17:56.783 ************************************ 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:56.783 ************************************ 00:17:56.783 START TEST nvmf_invalid 00:17:56.783 ************************************ 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:56.783 * Looking for test storage... 
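The aggregate checks at rpc.sh@112-118 above use the jsum helper: jq pulls one number per poll group (or per RDMA device) out of the captured $stats JSON, and awk sums the stream, which is where the totals 7, 105, 1284 and 126837898 come from. A minimal reconstruction from the trace (the real helper lives in target/rpc.sh; the exact body below is assumed):

    jsum() {
        local filter=$1
        # one value per jq match, summed by awk
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].io_qpairs'                               # 105 in this run
    jsum '.poll_groups[].transports[].devices[].request_latency'  # 126837898 in this run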
00:17:56.783 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:56.783 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:57.044 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:57.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.045 --rc genhtml_branch_coverage=1 00:17:57.045 --rc genhtml_function_coverage=1 00:17:57.045 --rc genhtml_legend=1 00:17:57.045 --rc geninfo_all_blocks=1 00:17:57.045 --rc geninfo_unexecuted_blocks=1 00:17:57.045 00:17:57.045 ' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:57.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.045 --rc genhtml_branch_coverage=1 00:17:57.045 --rc genhtml_function_coverage=1 00:17:57.045 --rc genhtml_legend=1 00:17:57.045 --rc geninfo_all_blocks=1 00:17:57.045 --rc geninfo_unexecuted_blocks=1 00:17:57.045 00:17:57.045 ' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:57.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.045 --rc genhtml_branch_coverage=1 00:17:57.045 --rc genhtml_function_coverage=1 00:17:57.045 --rc genhtml_legend=1 00:17:57.045 --rc geninfo_all_blocks=1 00:17:57.045 --rc geninfo_unexecuted_blocks=1 00:17:57.045 00:17:57.045 ' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:57.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.045 --rc genhtml_branch_coverage=1 00:17:57.045 --rc genhtml_function_coverage=1 00:17:57.045 --rc genhtml_legend=1 00:17:57.045 --rc geninfo_all_blocks=1 00:17:57.045 --rc geninfo_unexecuted_blocks=1 00:17:57.045 00:17:57.045 ' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:57.045 
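The lt/cmp_versions expansion traced above splits both version strings on '.', '-' and ':' and compares them component by component. A simplified sketch of that logic; the real scripts/common.sh version also validates that each component is decimal and supports the full set of comparison operators:

  # Compare dotted versions: cmp_versions 1.15 '<' 2  -> true.
  # Missing components are treated as 0, so 1.15 vs 2 compares 1 < 2 first.
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v len
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]   # all components equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  # e.g. lt "$(lcov --version | awk '{print $NF}')" 2 selects the lcov 1.x flags.
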
04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.045 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:57.045 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:57.046 04:34:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:05.184 04:34:42 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.184 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:05.185 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:05.185 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:05.185 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:05.185 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:18:05.185 04:34:42 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:05.185 04:34:42 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:05.185 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:05.185 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:05.185 altname enp217s0f0np0 00:18:05.185 altname ens818f0np0 00:18:05.185 inet 192.168.100.8/24 scope global mlx_0_0 00:18:05.185 valid_lft forever preferred_lft forever 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:05.185 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:05.186 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:05.186 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:05.186 altname enp217s0f1np1 00:18:05.186 altname ens818f1np1 00:18:05.186 inet 192.168.100.9/24 scope global mlx_0_1 00:18:05.186 valid_lft forever preferred_lft forever 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:05.186 04:34:42 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:05.186 192.168.100.9' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:05.186 192.168.100.9' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:05.186 04:34:42 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:05.186 192.168.100.9' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=292263 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 292263 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 292263 ']' 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.186 04:34:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:05.186 [2024-11-17 04:34:43.019317] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:05.186 [2024-11-17 04:34:43.019377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.186 [2024-11-17 04:34:43.108504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.186 [2024-11-17 04:34:43.130711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.186 [2024-11-17 04:34:43.130751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
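The address discovery traced above is plain iproute2 parsing: 'ip -o -4' prints one line per address, awk grabs the CIDR field, cut strips the prefix length, and the first and second target IPs are then peeled off the accumulated list with head and tail. A sketch of those two steps as they appear in the trace (the interface names mlx_0_0/mlx_0_1 are the ones found in this particular run; the real common.sh walks get_rdma_if_list rather than hard-coding them):

  # Return the first IPv4 address configured on an interface.
  get_ip_address() {
      local interface=$1
      # 'ip -o -4 addr show' prints e.g. '6: mlx_0_0 inet 192.168.100.8/24 ...'
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(for i in mlx_0_0 mlx_0_1; do get_ip_address "$i"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
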
00:18:05.186 [2024-11-17 04:34:43.130760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.186 [2024-11-17 04:34:43.130768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.186 [2024-11-17 04:34:43.130791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.186 [2024-11-17 04:34:43.132366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.186 [2024-11-17 04:34:43.132477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.186 [2024-11-17 04:34:43.132564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.186 [2024-11-17 04:34:43.132563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.186 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.186 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:05.186 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:05.186 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.186 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:05.186 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.186 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:05.186 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1615 00:18:05.186 [2024-11-17 04:34:43.448502] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:05.186 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:05.186 { 00:18:05.186 "nqn": "nqn.2016-06.io.spdk:cnode1615", 00:18:05.186 "tgt_name": "foobar", 00:18:05.186 "method": "nvmf_create_subsystem", 00:18:05.186 "req_id": 1 00:18:05.186 } 00:18:05.186 Got JSON-RPC error response 00:18:05.186 response: 00:18:05.186 { 00:18:05.187 "code": -32603, 00:18:05.187 "message": "Unable to find target foobar" 00:18:05.187 }' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:05.187 { 00:18:05.187 "nqn": "nqn.2016-06.io.spdk:cnode1615", 00:18:05.187 "tgt_name": "foobar", 00:18:05.187 "method": "nvmf_create_subsystem", 00:18:05.187 "req_id": 1 00:18:05.187 } 00:18:05.187 Got JSON-RPC error response 00:18:05.187 response: 00:18:05.187 { 00:18:05.187 "code": -32603, 00:18:05.187 "message": "Unable to find target foobar" 00:18:05.187 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2482 00:18:05.187 [2024-11-17 04:34:43.649213] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2482: 
invalid serial number 'SPDKISFASTANDAWESOME' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:05.187 { 00:18:05.187 "nqn": "nqn.2016-06.io.spdk:cnode2482", 00:18:05.187 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:05.187 "method": "nvmf_create_subsystem", 00:18:05.187 "req_id": 1 00:18:05.187 } 00:18:05.187 Got JSON-RPC error response 00:18:05.187 response: 00:18:05.187 { 00:18:05.187 "code": -32602, 00:18:05.187 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:05.187 }' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:05.187 { 00:18:05.187 "nqn": "nqn.2016-06.io.spdk:cnode2482", 00:18:05.187 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:05.187 "method": "nvmf_create_subsystem", 00:18:05.187 "req_id": 1 00:18:05.187 } 00:18:05.187 Got JSON-RPC error response 00:18:05.187 response: 00:18:05.187 { 00:18:05.187 "code": -32602, 00:18:05.187 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:05.187 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32461 00:18:05.187 [2024-11-17 04:34:43.849879] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32461: invalid model number 'SPDK_Controller' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:05.187 { 00:18:05.187 "nqn": "nqn.2016-06.io.spdk:cnode32461", 00:18:05.187 "model_number": "SPDK_Controller\u001f", 00:18:05.187 "method": "nvmf_create_subsystem", 00:18:05.187 "req_id": 1 00:18:05.187 } 00:18:05.187 Got JSON-RPC error response 00:18:05.187 response: 00:18:05.187 { 00:18:05.187 "code": -32602, 00:18:05.187 "message": "Invalid MN SPDK_Controller\u001f" 00:18:05.187 }' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:05.187 { 00:18:05.187 "nqn": "nqn.2016-06.io.spdk:cnode32461", 00:18:05.187 "model_number": "SPDK_Controller\u001f", 00:18:05.187 "method": "nvmf_create_subsystem", 00:18:05.187 "req_id": 1 00:18:05.187 } 00:18:05.187 Got JSON-RPC error response 00:18:05.187 response: 00:18:05.187 { 00:18:05.187 "code": -32602, 00:18:05.187 "message": "Invalid MN SPDK_Controller\u001f" 00:18:05.187 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # 
local chars 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:05.187 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:05.188 04:34:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:05.188 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.188 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.188 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:05.188 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:05.188 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:05.188 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.188 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.448 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:05.448 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:05.448 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:05.448 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.448 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:05.449 04:34:44 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n&L)]D3uJ;c)"t3)D)kX' 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'n&L)]D3uJ;c)"t3)D)kX' nqn.2016-06.io.spdk:cnode3317 00:18:05.449 [2024-11-17 04:34:44.227200] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3317: invalid serial number 'n&L)]D3uJ;c)"t3)D)kX' 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:05.449 { 00:18:05.449 "nqn": "nqn.2016-06.io.spdk:cnode3317", 00:18:05.449 "serial_number": "n&L)]D3uJ;c)\u007f\"t3)D)kX", 00:18:05.449 "method": "nvmf_create_subsystem", 00:18:05.449 "req_id": 1 00:18:05.449 } 00:18:05.449 Got JSON-RPC error response 00:18:05.449 response: 00:18:05.449 { 00:18:05.449 "code": -32602, 00:18:05.449 "message": "Invalid SN n&L)]D3uJ;c)\u007f\"t3)D)kX" 00:18:05.449 }' 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:05.449 { 00:18:05.449 "nqn": "nqn.2016-06.io.spdk:cnode3317", 00:18:05.449 "serial_number": "n&L)]D3uJ;c)\u007f\"t3)D)kX", 00:18:05.449 "method": "nvmf_create_subsystem", 00:18:05.449 "req_id": 1 00:18:05.449 } 00:18:05.449 Got JSON-RPC error response 00:18:05.449 response: 00:18:05.449 { 00:18:05.449 "code": -32602, 00:18:05.449 "message": "Invalid SN n&L)]D3uJ;c)\u007f\"t3)D)kX" 00:18:05.449 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 
00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.449 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x3b' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:05.710 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:05.711 04:34:44 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 61 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.711 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';Wd{iU;mrv)B%[xBTCCe#/z(d8MX>z)jB=nnu@5=~' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ';Wd{iU;mrv)B%[xBTCCe#/z(d8MX>z)jB=nnu@5=~' nqn.2016-06.io.spdk:cnode22922 00:18:05.972 [2024-11-17 04:34:44.756894] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22922: invalid model number ';Wd{iU;mrv)B%[xBTCCe#/z(d8MX>z)jB=nnu@5=~' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:05.972 { 00:18:05.972 "nqn": "nqn.2016-06.io.spdk:cnode22922", 00:18:05.972 "model_number": ";Wd{iU;mrv)B%[xBTCCe#/z(d8MX>z)jB=nnu@5=~", 00:18:05.972 "method": "nvmf_create_subsystem", 00:18:05.972 "req_id": 1 00:18:05.972 } 00:18:05.972 Got JSON-RPC error response 00:18:05.972 response: 00:18:05.972 { 00:18:05.972 "code": -32602, 00:18:05.972 "message": "Invalid MN ;Wd{iU;mrv)B%[xBTCCe#/z(d8MX>z)jB=nnu@5=~" 00:18:05.972 }' 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:05.972 { 00:18:05.972 "nqn": "nqn.2016-06.io.spdk:cnode22922", 00:18:05.972 "model_number": ";Wd{iU;mrv)B%[xBTCCe#/z(d8MX>z)jB=nnu@5=~", 00:18:05.972 "method": "nvmf_create_subsystem", 00:18:05.972 "req_id": 1 00:18:05.972 } 00:18:05.972 Got JSON-RPC error response 00:18:05.972 response: 00:18:05.972 { 00:18:05.972 "code": -32602, 00:18:05.972 "message": "Invalid MN ;Wd{iU;mrv)B%[xBTCCe#/z(d8MX>z)jB=nnu@5=~" 00:18:05.972 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:05.972 04:34:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:18:06.232 [2024-11-17 04:34:44.972580] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x110f860/0x1113d30) succeed. 00:18:06.232 [2024-11-17 04:34:44.981801] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1110ea0/0x11553d0) succeed. 
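The model-number case that just completed mirrors the serial-number one: gen_random_s 41 produced 41 printable characters, one more than the 40-byte NVMe model-number field, so nvmf_create_subsystem -d answered -32602 "Invalid MN"; with that covered, nvmf_create_transport --trtype rdma brought up both mlx5 IB devices for the listener and cntlid-range checks that follow. A standalone repro under the same rpc.py path (the urandom/tr pipeline and the cnode$RANDOM suffix are illustrative, not taken from the script):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  mn=$(tr -dc '!-~' < /dev/urandom | head -c 41)   # 41 printable bytes, one over the 40-byte MN field
  "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$RANDOM" -d "$mn" \
      || echo 'rejected as expected: Invalid MN'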
00:18:06.492 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:06.752 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:18:06.752 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:18:06.752 192.168.100.9' 00:18:06.752 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:06.752 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:18:06.752 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:18:06.752 [2024-11-17 04:34:45.508174] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:06.752 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:06.752 { 00:18:06.752 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:06.752 "listen_address": { 00:18:06.752 "trtype": "rdma", 00:18:06.752 "traddr": "192.168.100.8", 00:18:06.752 "trsvcid": "4421" 00:18:06.752 }, 00:18:06.752 "method": "nvmf_subsystem_remove_listener", 00:18:06.752 "req_id": 1 00:18:06.752 } 00:18:06.752 Got JSON-RPC error response 00:18:06.752 response: 00:18:06.752 { 00:18:06.752 "code": -32602, 00:18:06.752 "message": "Invalid parameters" 00:18:06.752 }' 00:18:06.752 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:06.752 { 00:18:06.752 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:06.752 "listen_address": { 00:18:06.752 "trtype": "rdma", 00:18:06.752 "traddr": "192.168.100.8", 00:18:06.752 "trsvcid": "4421" 00:18:06.752 }, 00:18:06.752 "method": "nvmf_subsystem_remove_listener", 00:18:06.752 "req_id": 1 00:18:06.752 } 00:18:06.752 Got JSON-RPC error response 00:18:06.752 response: 00:18:06.752 { 00:18:06.752 "code": -32602, 00:18:06.752 "message": "Invalid parameters" 00:18:06.752 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:06.752 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19642 -i 0 00:18:07.013 [2024-11-17 04:34:45.704832] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19642: invalid cntlid range [0-65519] 00:18:07.013 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:07.013 { 00:18:07.013 "nqn": "nqn.2016-06.io.spdk:cnode19642", 00:18:07.013 "min_cntlid": 0, 00:18:07.013 "method": "nvmf_create_subsystem", 00:18:07.013 "req_id": 1 00:18:07.013 } 00:18:07.013 Got JSON-RPC error response 00:18:07.013 response: 00:18:07.013 { 00:18:07.013 "code": -32602, 00:18:07.013 "message": "Invalid cntlid range [0-65519]" 00:18:07.013 }' 00:18:07.013 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:07.013 { 00:18:07.013 "nqn": "nqn.2016-06.io.spdk:cnode19642", 00:18:07.013 "min_cntlid": 0, 00:18:07.013 "method": "nvmf_create_subsystem", 00:18:07.013 "req_id": 1 00:18:07.013 } 00:18:07.013 Got JSON-RPC error response 00:18:07.013 response: 00:18:07.013 { 00:18:07.013 "code": -32602, 00:18:07.013 "message": 
"Invalid cntlid range [0-65519]" 00:18:07.013 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:07.013 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25179 -i 65520 00:18:07.272 [2024-11-17 04:34:45.913575] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25179: invalid cntlid range [65520-65519] 00:18:07.272 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:07.272 { 00:18:07.272 "nqn": "nqn.2016-06.io.spdk:cnode25179", 00:18:07.272 "min_cntlid": 65520, 00:18:07.272 "method": "nvmf_create_subsystem", 00:18:07.272 "req_id": 1 00:18:07.272 } 00:18:07.272 Got JSON-RPC error response 00:18:07.272 response: 00:18:07.272 { 00:18:07.272 "code": -32602, 00:18:07.272 "message": "Invalid cntlid range [65520-65519]" 00:18:07.272 }' 00:18:07.272 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:07.272 { 00:18:07.272 "nqn": "nqn.2016-06.io.spdk:cnode25179", 00:18:07.272 "min_cntlid": 65520, 00:18:07.272 "method": "nvmf_create_subsystem", 00:18:07.272 "req_id": 1 00:18:07.272 } 00:18:07.272 Got JSON-RPC error response 00:18:07.272 response: 00:18:07.272 { 00:18:07.272 "code": -32602, 00:18:07.272 "message": "Invalid cntlid range [65520-65519]" 00:18:07.272 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:07.272 04:34:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29041 -I 0 00:18:07.533 [2024-11-17 04:34:46.122336] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29041: invalid cntlid range [1-0] 00:18:07.533 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:07.533 { 00:18:07.533 "nqn": "nqn.2016-06.io.spdk:cnode29041", 00:18:07.533 "max_cntlid": 0, 00:18:07.533 "method": "nvmf_create_subsystem", 00:18:07.533 "req_id": 1 00:18:07.533 } 00:18:07.533 Got JSON-RPC error response 00:18:07.533 response: 00:18:07.533 { 00:18:07.533 "code": -32602, 00:18:07.533 "message": "Invalid cntlid range [1-0]" 00:18:07.533 }' 00:18:07.533 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:07.533 { 00:18:07.533 "nqn": "nqn.2016-06.io.spdk:cnode29041", 00:18:07.533 "max_cntlid": 0, 00:18:07.533 "method": "nvmf_create_subsystem", 00:18:07.533 "req_id": 1 00:18:07.533 } 00:18:07.533 Got JSON-RPC error response 00:18:07.533 response: 00:18:07.533 { 00:18:07.533 "code": -32602, 00:18:07.533 "message": "Invalid cntlid range [1-0]" 00:18:07.533 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:07.533 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4653 -I 65520 00:18:07.533 [2024-11-17 04:34:46.327086] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4653: invalid cntlid range [1-65520] 00:18:07.533 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:07.533 { 00:18:07.533 "nqn": "nqn.2016-06.io.spdk:cnode4653", 00:18:07.533 "max_cntlid": 65520, 00:18:07.533 "method": "nvmf_create_subsystem", 00:18:07.533 "req_id": 1 00:18:07.533 } 00:18:07.533 Got JSON-RPC 
error response 00:18:07.533 response: 00:18:07.533 { 00:18:07.533 "code": -32602, 00:18:07.533 "message": "Invalid cntlid range [1-65520]" 00:18:07.533 }' 00:18:07.533 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:07.533 { 00:18:07.533 "nqn": "nqn.2016-06.io.spdk:cnode4653", 00:18:07.533 "max_cntlid": 65520, 00:18:07.533 "method": "nvmf_create_subsystem", 00:18:07.533 "req_id": 1 00:18:07.533 } 00:18:07.533 Got JSON-RPC error response 00:18:07.533 response: 00:18:07.533 { 00:18:07.533 "code": -32602, 00:18:07.533 "message": "Invalid cntlid range [1-65520]" 00:18:07.533 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:07.533 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22680 -i 6 -I 5 00:18:07.794 [2024-11-17 04:34:46.527819] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22680: invalid cntlid range [6-5] 00:18:07.794 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:07.794 { 00:18:07.794 "nqn": "nqn.2016-06.io.spdk:cnode22680", 00:18:07.794 "min_cntlid": 6, 00:18:07.794 "max_cntlid": 5, 00:18:07.794 "method": "nvmf_create_subsystem", 00:18:07.794 "req_id": 1 00:18:07.794 } 00:18:07.794 Got JSON-RPC error response 00:18:07.794 response: 00:18:07.794 { 00:18:07.794 "code": -32602, 00:18:07.794 "message": "Invalid cntlid range [6-5]" 00:18:07.794 }' 00:18:07.794 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:07.794 { 00:18:07.794 "nqn": "nqn.2016-06.io.spdk:cnode22680", 00:18:07.794 "min_cntlid": 6, 00:18:07.794 "max_cntlid": 5, 00:18:07.794 "method": "nvmf_create_subsystem", 00:18:07.794 "req_id": 1 00:18:07.794 } 00:18:07.794 Got JSON-RPC error response 00:18:07.794 response: 00:18:07.794 { 00:18:07.794 "code": -32602, 00:18:07.794 "message": "Invalid cntlid range [6-5]" 00:18:07.794 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:07.794 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:08.056 { 00:18:08.056 "name": "foobar", 00:18:08.056 "method": "nvmf_delete_target", 00:18:08.056 "req_id": 1 00:18:08.056 } 00:18:08.056 Got JSON-RPC error response 00:18:08.056 response: 00:18:08.056 { 00:18:08.056 "code": -32602, 00:18:08.056 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:08.056 }' 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:08.056 { 00:18:08.056 "name": "foobar", 00:18:08.056 "method": "nvmf_delete_target", 00:18:08.056 "req_id": 1 00:18:08.056 } 00:18:08.056 Got JSON-RPC error response 00:18:08.056 response: 00:18:08.056 { 00:18:08.056 "code": -32602, 00:18:08.056 "message": "The specified target doesn't exist, cannot delete it." 
00:18:08.056 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:08.056 rmmod nvme_rdma 00:18:08.056 rmmod nvme_fabrics 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:08.056 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 292263 ']' 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 292263 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 292263 ']' 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 292263 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292263 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292263' 00:18:08.057 killing process with pid 292263 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 292263 00:18:08.057 04:34:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 292263 00:18:08.318 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:08.318 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:08.318 00:18:08.318 real 0m11.578s 00:18:08.318 user 0m19.833s 00:18:08.318 sys 0m6.852s 00:18:08.318 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.318 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:08.318 ************************************ 00:18:08.318 END TEST 
nvmf_invalid 00:18:08.318 ************************************ 00:18:08.318 04:34:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:18:08.318 04:34:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.318 04:34:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.318 04:34:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.318 ************************************ 00:18:08.318 START TEST nvmf_connect_stress 00:18:08.318 ************************************ 00:18:08.318 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:18:08.579 * Looking for test storage... 00:18:08.579 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:08.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.579 --rc genhtml_branch_coverage=1 00:18:08.579 --rc genhtml_function_coverage=1 00:18:08.579 --rc genhtml_legend=1 00:18:08.579 --rc geninfo_all_blocks=1 00:18:08.579 --rc geninfo_unexecuted_blocks=1 00:18:08.579 00:18:08.579 ' 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:08.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.579 --rc genhtml_branch_coverage=1 00:18:08.579 --rc genhtml_function_coverage=1 00:18:08.579 --rc genhtml_legend=1 00:18:08.579 --rc geninfo_all_blocks=1 00:18:08.579 --rc geninfo_unexecuted_blocks=1 00:18:08.579 00:18:08.579 ' 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:08.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.579 --rc genhtml_branch_coverage=1 00:18:08.579 --rc genhtml_function_coverage=1 00:18:08.579 --rc genhtml_legend=1 00:18:08.579 --rc geninfo_all_blocks=1 00:18:08.579 --rc geninfo_unexecuted_blocks=1 00:18:08.579 00:18:08.579 ' 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:08.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.579 --rc genhtml_branch_coverage=1 00:18:08.579 --rc genhtml_function_coverage=1 00:18:08.579 --rc genhtml_legend=1 00:18:08.579 --rc geninfo_all_blocks=1 00:18:08.579 --rc geninfo_unexecuted_blocks=1 00:18:08.579 00:18:08.579 ' 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.579 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:08.580 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:08.580 04:34:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:16.734 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:16.734 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:16.734 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:16.734 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.734 04:34:54 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:16.734 
04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:18:16.734 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:16.734 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:18:16.734 altname enp217s0f0np0
00:18:16.734 altname ens818f0np0
00:18:16.734 inet 192.168.100.8/24 scope global mlx_0_0
00:18:16.734 valid_lft forever preferred_lft forever
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:18:16.734 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:16.734 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:18:16.734 altname enp217s0f1np1
00:18:16.734 altname ens818f1np1
00:18:16.734 inet 192.168.100.9/24 scope global mlx_0_1
00:18:16.734 valid_lft forever preferred_lft forever
00:18:16.734 04:34:54 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:16.734 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:16.735 
04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:16.735 192.168.100.9' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:16.735 192.168.100.9' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:16.735 192.168.100.9' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=296447 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 296447 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 296447 ']' 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.735 04:34:54 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.735 [2024-11-17 04:34:54.683739] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:16.735 [2024-11-17 04:34:54.683804] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.735 [2024-11-17 04:34:54.777450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:16.735 [2024-11-17 04:34:54.799825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.735 [2024-11-17 04:34:54.799861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.735 [2024-11-17 04:34:54.799871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.735 [2024-11-17 04:34:54.799879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.735 [2024-11-17 04:34:54.799886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.735 [2024-11-17 04:34:54.803951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.735 [2024-11-17 04:34:54.804056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.735 [2024-11-17 04:34:54.804057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.735 04:34:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.735 [2024-11-17 04:34:54.972112] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7b13d0/0x7b5880) succeed. 
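The trace above is the standard target bring-up for these tests: nvmfappstart launches build/bin/nvmf_tgt with core mask 0xE, waitforlisten blocks until /var/tmp/spdk.sock answers, and the first rpc_cmd creates the RDMA transport that produces the create_ib_device notices. Reproduced outside the harness, the sequence is roughly the following (a minimal sketch, assuming a hypothetical $SPDK_DIR pointing at an SPDK checkout and the default rpc.py socket; the suite itself goes through its rpc_cmd wrapper instead):

    # Start the target and wait until its RPC socket answers.
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    until $SPDK_DIR/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # The same RPCs connect_stress.sh issues in this log: RDMA transport,
    # subsystem cnode1, an RDMA listener on 192.168.100.8:4420, and a null bdev.
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $SPDK_DIR/scripts/rpc.py bdev_null_create NULL1 1000 512
    # A namespace attach for NULL1 would normally follow; it is not captured in this excerpt.
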
00:18:16.735 [2024-11-17 04:34:54.981309] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7b2970/0x7f6f20) succeed. 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.735 [2024-11-17 04:34:55.097963] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.735 NULL1 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=296481 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 
04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 
04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.735 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.306 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.306 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:17.306 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.306 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.306 04:34:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.566 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.566 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:17.566 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.566 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.566 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.826 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.826 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:17.826 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.826 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.826 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.087 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.087 
04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:18.087 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.087 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.087 04:34:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.658 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.658 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:18.658 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.658 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.658 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.918 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.918 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:18.918 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.918 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.918 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.178 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.178 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:19.178 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:19.179 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.179 04:34:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.439 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.439 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:19.439 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:19.439 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.439 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.699 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.699 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:19.699 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:19.699 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.699 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.270 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
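The repeating kill -0 296481 / rpc_cmd pairs above and below are the keep-alive monitor for the stress run: connect_stress was launched in the background with -t 10, the seq 1 20 loop appended twenty commands to rpc.txt through the cat calls at connect_stress.sh line 28, and the harness now replays that batch while confirming the workload is still alive. A minimal sketch of that pattern, reusing the PERF_PID, rpcs, and rpc_cmd names visible in the trace and assuming a hypothetical $rootdir for the SPDK tree (the exact loop body of connect_stress.sh is not shown in this excerpt):

    # Background workload, then poll it; kill -0 only tests process existence.
    "$rootdir/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"      # exercise the target while connections churn
    done
    wait "$PERF_PID"           # surface the stress tool's exit status
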
00:18:20.270 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:20.270 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.270 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.270 04:34:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.535 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.535 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:20.535 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.535 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.535 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.794 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.795 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:20.795 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.795 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.795 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.055 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.055 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:21.055 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.055 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.055 04:34:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.315 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.315 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:21.315 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.315 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.315 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.886 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.886 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:21.886 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.886 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.886 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.146 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:22.146 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:22.146 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.146 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.146 04:35:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.405 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.405 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:22.405 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.405 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.405 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.666 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.666 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:22.666 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.666 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.666 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:23.252 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.252 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:23.252 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:23.252 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.252 04:35:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:23.518 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.518 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:23.518 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:23.518 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.518 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:23.793 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.793 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:23.793 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:23.793 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.793 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.073 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:24.073 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:24.073 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.073 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.073 04:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.342 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.342 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:24.342 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.342 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.342 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.613 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.613 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:24.613 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.613 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.613 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.911 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.911 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:24.911 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.911 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.911 04:35:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.512 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.512 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:25.512 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.512 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.512 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.791 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.791 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:25.791 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.791 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.791 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.065 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:26.065 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:26.065 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.065 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.065 04:35:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.337 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.337 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:26.337 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.337 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.337 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.607 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:26.607 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.607 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 296481 00:18:26.607 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (296481) - No such process 00:18:26.607 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 296481 00:18:26.607 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:26.607 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:26.607 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:26.607 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:26.607 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:26.608 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:26.608 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:26.608 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:26.608 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:26.608 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:26.608 rmmod nvme_rdma 00:18:26.608 rmmod nvme_fabrics 00:18:26.608 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 296447 ']' 00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 296447 00:18:26.886 04:35:05 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 296447 ']'
00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 296447
00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296447
00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296447'
00:18:26.886 killing process with pid 296447
00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 296447
00:18:26.886 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 296447
00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:18:27.171
00:18:27.171 real 0m18.612s
00:18:27.171 user 0m39.883s
00:18:27.171 sys 0m8.416s
00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:27.171 ************************************
00:18:27.171 END TEST nvmf_connect_stress
00:18:27.171 ************************************
00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma
00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:27.171 ************************************
00:18:27.171 START TEST nvmf_fused_ordering
00:18:27.171 ************************************
00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma
00:18:27.171 * Looking for test storage... 
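The shutdown traced above is the harness's killprocess guard: it refuses an empty PID, resolves the process name with ps (reactor_1 here) so a recycled PID cannot take down an unrelated or sudo-owned process, then kills and reaps the target before the rmmod cleanup. Condensed, the logic at autotest_common.sh lines 954-978 looks roughly like this (a sketch; the real helper carries additional retries and platform branches not shown in this excerpt):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                         # the '[ -z 296447 ]' check above
        kill -0 "$pid" || return 0                        # already gone, nothing to do
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1            # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                               # reap; ignore the kill-induced status
    }
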
00:18:27.171 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:18:27.171 04:35:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:27.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.445 --rc genhtml_branch_coverage=1 00:18:27.445 --rc genhtml_function_coverage=1 00:18:27.445 --rc genhtml_legend=1 00:18:27.445 --rc geninfo_all_blocks=1 00:18:27.445 --rc geninfo_unexecuted_blocks=1 00:18:27.445 00:18:27.445 ' 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:27.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.445 --rc genhtml_branch_coverage=1 00:18:27.445 --rc genhtml_function_coverage=1 00:18:27.445 --rc genhtml_legend=1 00:18:27.445 --rc geninfo_all_blocks=1 00:18:27.445 --rc geninfo_unexecuted_blocks=1 00:18:27.445 00:18:27.445 ' 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:27.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.445 --rc genhtml_branch_coverage=1 00:18:27.445 --rc genhtml_function_coverage=1 00:18:27.445 --rc genhtml_legend=1 00:18:27.445 --rc geninfo_all_blocks=1 00:18:27.445 --rc geninfo_unexecuted_blocks=1 00:18:27.445 00:18:27.445 ' 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:27.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.445 --rc genhtml_branch_coverage=1 00:18:27.445 --rc genhtml_function_coverage=1 00:18:27.445 --rc genhtml_legend=1 00:18:27.445 --rc geninfo_all_blocks=1 00:18:27.445 --rc geninfo_unexecuted_blocks=1 00:18:27.445 00:18:27.445 ' 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.445 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:27.446 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:27.446 04:35:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:34.183 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:34.184 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:34.184 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:34.184 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:34.184 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.480 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:34.480 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.480 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:34.480 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:34.480 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.480 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:34.480 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:34.480 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.480 04:35:12 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:34.480 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:34.480 04:35:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:34.480 
04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:34.480 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.480 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:34.480 altname enp217s0f0np0 00:18:34.480 altname ens818f0np0 00:18:34.480 inet 192.168.100.8/24 scope global mlx_0_0 00:18:34.480 valid_lft forever preferred_lft forever 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:34.480 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:34.481 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.481 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:34.481 altname enp217s0f1np1 00:18:34.481 altname ens818f1np1 00:18:34.481 inet 192.168.100.9/24 scope global mlx_0_1 00:18:34.481 valid_lft forever preferred_lft forever 00:18:34.481 04:35:13 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:34.481 
04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:34.481 192.168.100.9' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:34.481 192.168.100.9' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:34.481 192.168.100.9' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=301694 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 301694 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 301694 ']' 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.481 04:35:13 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.481 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.784 [2024-11-17 04:35:13.320837] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:34.784 [2024-11-17 04:35:13.320894] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.784 [2024-11-17 04:35:13.412008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.784 [2024-11-17 04:35:13.433684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.784 [2024-11-17 04:35:13.433719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.784 [2024-11-17 04:35:13.433728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.784 [2024-11-17 04:35:13.433737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.784 [2024-11-17 04:35:13.433744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.784 [2024-11-17 04:35:13.434325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.784 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.784 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:34.784 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.784 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.784 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.784 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.785 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:34.785 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.785 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:35.055 [2024-11-17 04:35:13.593814] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x89ff50/0x8a4400) succeed. 00:18:35.055 [2024-11-17 04:35:13.602568] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8a13b0/0x8e5aa0) succeed. 
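For orientation, the target-side provisioning that the rpc_cmd calls here and just below perform can be reproduced by hand roughly as follows. This is a minimal sketch, not the harness itself: rpc_cmd wraps SPDK's scripts/rpc.py, $SPDK_ROOT stands in for the workspace checkout, and the interface name mlx_0_0 and all RPC arguments are taken directly from the trace.

    # Minimal sketch of this target setup, assuming an SPDK checkout in $SPDK_ROOT
    # and the mlx_0_0 RDMA interface discovered in the trace above.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK_ROOT/scripts/rpc.py"

    # Same derivation the traced get_ip_address helper performs (192.168.100.8 here).
    TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)

    # Start the target app with the same core mask and trace flags as the trace.
    "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    sleep 2   # the harness instead polls /var/tmp/spdk.sock via waitforlisten

    "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a "$TARGET_IP" -s 4420
    "$RPC" bdev_null_create NULL1 1000 512   # ~1 GB null bdev with 512-byte blocks
    "$RPC" bdev_wait_for_examine
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1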
00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:35.055 [2024-11-17 04:35:13.649313] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:35.055 NULL1 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.055 04:35:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:35.055 [2024-11-17 04:35:13.706843] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
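The fused_ordering binary launched above acts as its own initiator: it takes a -r transport ID string and connects through SPDK's userspace NVMe driver rather than nvme-cli, so rerunning it by hand against the same target needs only the command from the trace (transport ID fields exactly as logged):

    # Rerun the test tool manually; -r is SPDK's standard transport ID string.
    "$SPDK_ROOT/test/nvme/fused_ordering/fused_ordering" \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

On success it attaches to cnode1, reports the 1 GB namespace, and prints an incrementing fused_ordering(N) counter as it runs, as seen below.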
00:18:35.055 [2024-11-17 04:35:13.706880] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301835 ]
00:18:35.350 Attached to nqn.2016-06.io.spdk:cnode1
00:18:35.350 Namespace ID: 1 size: 1GB
00:18:35.350 fused_ordering(0)
[fused_ordering(1) through fused_ordering(850): one counter line per iteration at 00:18:35.350-00:18:35.632, identical except for the incrementing value; condensed]
fused_ordering(851) 00:18:35.632 fused_ordering(852) 00:18:35.632 fused_ordering(853) 00:18:35.632 fused_ordering(854) 00:18:35.632 fused_ordering(855) 00:18:35.632 fused_ordering(856) 00:18:35.632 fused_ordering(857) 00:18:35.632 fused_ordering(858) 00:18:35.632 fused_ordering(859) 00:18:35.632 fused_ordering(860) 00:18:35.632 fused_ordering(861) 00:18:35.632 fused_ordering(862) 00:18:35.632 fused_ordering(863) 00:18:35.632 fused_ordering(864) 00:18:35.632 fused_ordering(865) 00:18:35.632 fused_ordering(866) 00:18:35.632 fused_ordering(867) 00:18:35.632 fused_ordering(868) 00:18:35.632 fused_ordering(869) 00:18:35.632 fused_ordering(870) 00:18:35.632 fused_ordering(871) 00:18:35.632 fused_ordering(872) 00:18:35.632 fused_ordering(873) 00:18:35.632 fused_ordering(874) 00:18:35.632 fused_ordering(875) 00:18:35.632 fused_ordering(876) 00:18:35.632 fused_ordering(877) 00:18:35.632 fused_ordering(878) 00:18:35.632 fused_ordering(879) 00:18:35.632 fused_ordering(880) 00:18:35.632 fused_ordering(881) 00:18:35.632 fused_ordering(882) 00:18:35.632 fused_ordering(883) 00:18:35.632 fused_ordering(884) 00:18:35.632 fused_ordering(885) 00:18:35.632 fused_ordering(886) 00:18:35.632 fused_ordering(887) 00:18:35.632 fused_ordering(888) 00:18:35.632 fused_ordering(889) 00:18:35.632 fused_ordering(890) 00:18:35.632 fused_ordering(891) 00:18:35.632 fused_ordering(892) 00:18:35.632 fused_ordering(893) 00:18:35.632 fused_ordering(894) 00:18:35.632 fused_ordering(895) 00:18:35.632 fused_ordering(896) 00:18:35.632 fused_ordering(897) 00:18:35.632 fused_ordering(898) 00:18:35.632 fused_ordering(899) 00:18:35.632 fused_ordering(900) 00:18:35.632 fused_ordering(901) 00:18:35.632 fused_ordering(902) 00:18:35.632 fused_ordering(903) 00:18:35.632 fused_ordering(904) 00:18:35.632 fused_ordering(905) 00:18:35.632 fused_ordering(906) 00:18:35.632 fused_ordering(907) 00:18:35.632 fused_ordering(908) 00:18:35.632 fused_ordering(909) 00:18:35.632 fused_ordering(910) 00:18:35.632 fused_ordering(911) 00:18:35.632 fused_ordering(912) 00:18:35.632 fused_ordering(913) 00:18:35.632 fused_ordering(914) 00:18:35.632 fused_ordering(915) 00:18:35.632 fused_ordering(916) 00:18:35.632 fused_ordering(917) 00:18:35.632 fused_ordering(918) 00:18:35.632 fused_ordering(919) 00:18:35.632 fused_ordering(920) 00:18:35.632 fused_ordering(921) 00:18:35.632 fused_ordering(922) 00:18:35.632 fused_ordering(923) 00:18:35.632 fused_ordering(924) 00:18:35.632 fused_ordering(925) 00:18:35.632 fused_ordering(926) 00:18:35.632 fused_ordering(927) 00:18:35.632 fused_ordering(928) 00:18:35.632 fused_ordering(929) 00:18:35.632 fused_ordering(930) 00:18:35.632 fused_ordering(931) 00:18:35.632 fused_ordering(932) 00:18:35.632 fused_ordering(933) 00:18:35.632 fused_ordering(934) 00:18:35.632 fused_ordering(935) 00:18:35.632 fused_ordering(936) 00:18:35.632 fused_ordering(937) 00:18:35.632 fused_ordering(938) 00:18:35.632 fused_ordering(939) 00:18:35.632 fused_ordering(940) 00:18:35.632 fused_ordering(941) 00:18:35.632 fused_ordering(942) 00:18:35.632 fused_ordering(943) 00:18:35.632 fused_ordering(944) 00:18:35.632 fused_ordering(945) 00:18:35.632 fused_ordering(946) 00:18:35.632 fused_ordering(947) 00:18:35.632 fused_ordering(948) 00:18:35.632 fused_ordering(949) 00:18:35.632 fused_ordering(950) 00:18:35.633 fused_ordering(951) 00:18:35.633 fused_ordering(952) 00:18:35.633 fused_ordering(953) 00:18:35.633 fused_ordering(954) 00:18:35.633 fused_ordering(955) 00:18:35.633 fused_ordering(956) 00:18:35.633 fused_ordering(957) 00:18:35.633 fused_ordering(958) 
00:18:35.633 fused_ordering(959) 00:18:35.633 fused_ordering(960) 00:18:35.633 fused_ordering(961) 00:18:35.633 fused_ordering(962) 00:18:35.633 fused_ordering(963) 00:18:35.633 fused_ordering(964) 00:18:35.633 fused_ordering(965) 00:18:35.633 fused_ordering(966) 00:18:35.633 fused_ordering(967) 00:18:35.633 fused_ordering(968) 00:18:35.633 fused_ordering(969) 00:18:35.633 fused_ordering(970) 00:18:35.633 fused_ordering(971) 00:18:35.633 fused_ordering(972) 00:18:35.633 fused_ordering(973) 00:18:35.633 fused_ordering(974) 00:18:35.633 fused_ordering(975) 00:18:35.633 fused_ordering(976) 00:18:35.633 fused_ordering(977) 00:18:35.633 fused_ordering(978) 00:18:35.633 fused_ordering(979) 00:18:35.633 fused_ordering(980) 00:18:35.633 fused_ordering(981) 00:18:35.633 fused_ordering(982) 00:18:35.633 fused_ordering(983) 00:18:35.633 fused_ordering(984) 00:18:35.633 fused_ordering(985) 00:18:35.633 fused_ordering(986) 00:18:35.633 fused_ordering(987) 00:18:35.633 fused_ordering(988) 00:18:35.633 fused_ordering(989) 00:18:35.633 fused_ordering(990) 00:18:35.633 fused_ordering(991) 00:18:35.633 fused_ordering(992) 00:18:35.633 fused_ordering(993) 00:18:35.633 fused_ordering(994) 00:18:35.633 fused_ordering(995) 00:18:35.633 fused_ordering(996) 00:18:35.633 fused_ordering(997) 00:18:35.633 fused_ordering(998) 00:18:35.633 fused_ordering(999) 00:18:35.633 fused_ordering(1000) 00:18:35.633 fused_ordering(1001) 00:18:35.633 fused_ordering(1002) 00:18:35.633 fused_ordering(1003) 00:18:35.633 fused_ordering(1004) 00:18:35.633 fused_ordering(1005) 00:18:35.633 fused_ordering(1006) 00:18:35.633 fused_ordering(1007) 00:18:35.633 fused_ordering(1008) 00:18:35.633 fused_ordering(1009) 00:18:35.633 fused_ordering(1010) 00:18:35.633 fused_ordering(1011) 00:18:35.633 fused_ordering(1012) 00:18:35.633 fused_ordering(1013) 00:18:35.633 fused_ordering(1014) 00:18:35.633 fused_ordering(1015) 00:18:35.633 fused_ordering(1016) 00:18:35.633 fused_ordering(1017) 00:18:35.633 fused_ordering(1018) 00:18:35.633 fused_ordering(1019) 00:18:35.633 fused_ordering(1020) 00:18:35.633 fused_ordering(1021) 00:18:35.633 fused_ordering(1022) 00:18:35.633 fused_ordering(1023) 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:35.633 rmmod nvme_rdma 00:18:35.633 rmmod nvme_fabrics 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:35.633 04:35:14 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 301694 ']' 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 301694 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 301694 ']' 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 301694 00:18:35.633 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 301694 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 301694' 00:18:35.905 killing process with pid 301694 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 301694 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 301694 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:35.905 00:18:35.905 real 0m8.881s 00:18:35.905 user 0m4.219s 00:18:35.905 sys 0m5.929s 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.905 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:35.905 ************************************ 00:18:35.905 END TEST nvmf_fused_ordering 00:18:35.905 ************************************ 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:36.179 ************************************ 00:18:36.179 START TEST nvmf_ns_masking 00:18:36.179 ************************************ 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:18:36.179 * Looking for test storage... 
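
The killprocess sequence traced above reduces to a standard kill-and-reap pattern. A minimal standalone sketch of it (PID value taken from the log; this is not the autotest_common.sh helper itself):

    pid=301694                           # PID recorded when nvmf_tgt was launched
    if kill -0 "$pid" 2>/dev/null; then  # signal 0 only tests that the process exists
        kill "$pid"                      # default SIGTERM, as in the trace above
        wait "$pid" 2>/dev/null || true  # reap it; valid because nvmf_tgt is a child of this shell
    fi

The ps --no-headers -o comm= step in the trace inspects the command name first (here it resolves to reactor_1, an SPDK reactor) so that a sudo wrapper is never the direct kill target.
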
00:18:36.179 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.179 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:36.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.180 --rc genhtml_branch_coverage=1 00:18:36.180 --rc genhtml_function_coverage=1 00:18:36.180 --rc genhtml_legend=1 00:18:36.180 --rc geninfo_all_blocks=1 00:18:36.180 --rc geninfo_unexecuted_blocks=1 00:18:36.180 00:18:36.180 ' 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:36.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.180 --rc genhtml_branch_coverage=1 00:18:36.180 --rc genhtml_function_coverage=1 00:18:36.180 --rc genhtml_legend=1 00:18:36.180 --rc geninfo_all_blocks=1 00:18:36.180 --rc geninfo_unexecuted_blocks=1 00:18:36.180 00:18:36.180 ' 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:36.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.180 --rc genhtml_branch_coverage=1 00:18:36.180 --rc genhtml_function_coverage=1 00:18:36.180 --rc genhtml_legend=1 00:18:36.180 --rc geninfo_all_blocks=1 00:18:36.180 --rc geninfo_unexecuted_blocks=1 00:18:36.180 00:18:36.180 ' 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:36.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.180 --rc genhtml_branch_coverage=1 00:18:36.180 --rc genhtml_function_coverage=1 00:18:36.180 --rc genhtml_legend=1 00:18:36.180 --rc geninfo_all_blocks=1 00:18:36.180 --rc geninfo_unexecuted_blocks=1 00:18:36.180 00:18:36.180 ' 00:18:36.180 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.180 04:35:14 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 04:35:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2-6 -- # [PATH trace elided: export.sh prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin (already present several times over from earlier sourcings) ahead of the system PATH, exports it, and echoes the result] 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:36.452 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 04:35:15
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3da9ca25-3185-4ef9-a3d2-d2db29688370 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6cd51ad6-f34a-4e0b-8ef0-0ad65b708e93 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=20075e62-2d7b-4173-adc9-1004e464f55a 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.452 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.453 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.453 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:36.453 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:36.453 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:36.453 04:35:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:44.588 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.589 04:35:21 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:44.589 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:44.589 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:44.589 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:44.589 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:44.589 04:35:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.589 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:44.590 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.590 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:44.590 altname enp217s0f0np0 00:18:44.590 altname ens818f0np0 00:18:44.590 inet 192.168.100.8/24 scope global mlx_0_0 00:18:44.590 valid_lft forever preferred_lft forever 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:44.590 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.590 link/ether 
ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:44.590 altname enp217s0f1np1 00:18:44.590 altname ens818f1np1 00:18:44.590 inet 192.168.100.9/24 scope global mlx_0_1 00:18:44.590 valid_lft forever preferred_lft forever 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
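
The get_ip_address steps traced here and continuing just below form a single pipeline split across trace lines. Written out as one command (interface name as in the log), the pattern is:

    # field $4 of `ip -o -4 addr show` is the CIDR address; cut strips the prefix length
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8

The same three steps then repeat for mlx_0_1, yielding 192.168.100.9 for the second port.
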
00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:44.590 192.168.100.9' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:44.590 192.168.100.9' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:44.590 192.168.100.9' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=305287 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 305287 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 305287 ']' 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.590 04:35:22 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.590 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:44.590 [2024-11-17 04:35:22.231277] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:44.590 [2024-11-17 04:35:22.231328] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.591 [2024-11-17 04:35:22.322368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.591 [2024-11-17 04:35:22.343496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.591 [2024-11-17 04:35:22.343534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.591 [2024-11-17 04:35:22.343544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.591 [2024-11-17 04:35:22.343552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.591 [2024-11-17 04:35:22.343559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.591 [2024-11-17 04:35:22.344175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:44.591 [2024-11-17 04:35:22.671610] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xef9c50/0xefe100) succeed. 00:18:44.591 [2024-11-17 04:35:22.680444] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xefb0b0/0xf3f7a0) succeed. 
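
With the target process up and listening on the RPC socket, ns_masking.sh drives the data-plane setup through rpc.py. Condensed from the RPCs traced here and just below (all arguments verbatim from the log; the long script path is shortened to $rpc_py for readability):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport, 8 KiB I/O unit size
    $rpc_py bdev_malloc_create 64 512 -b Malloc1                               # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The two "Create IB device ... succeed" notices above are the transport binding to both mlx5 ports discovered earlier.
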
00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:44.591 Malloc1 00:18:44.591 04:35:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:44.591 Malloc2 00:18:44.591 04:35:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:44.591 04:35:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:44.851 04:35:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:45.110 [2024-11-17 04:35:23.739814] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:45.110 04:35:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:45.111 04:35:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 20075e62-2d7b-4173-adc9-1004e464f55a -a 192.168.100.8 -s 4420 -i 4 00:18:45.371 04:35:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:45.371 04:35:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:45.371 04:35:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.371 04:35:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:45.371 04:35:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:47.282 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:47.282 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:47.282 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:47.282 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:47.282 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.282 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:47.282 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:47.282 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:47.541 [ 0]:0x1 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0ec219be808e464da58bd42a0dae0d4f 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0ec219be808e464da58bd42a0dae0d4f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:47.541 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:47.801 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:47.801 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:47.801 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:47.801 [ 0]:0x1 00:18:47.801 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:47.801 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:47.801 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0ec219be808e464da58bd42a0dae0d4f 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0ec219be808e464da58bd42a0dae0d4f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:47.802 [ 1]:0x2 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=310d94146eef432d9a58d66430261d54 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 310d94146eef432d9a58d66430261d54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:47.802 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:18:48.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.061 04:35:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:48.321 04:35:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:48.580 04:35:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:48.580 04:35:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 20075e62-2d7b-4173-adc9-1004e464f55a -a 192.168.100.8 -s 4420 -i 4 00:18:48.838 04:35:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:48.838 04:35:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:48.839 04:35:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.839 04:35:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:48.839 04:35:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:48.839 04:35:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:50.746 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:50.746 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:50.746 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.005 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:51.006 [ 0]:0x2 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=310d94146eef432d9a58d66430261d54 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 310d94146eef432d9a58d66430261d54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.006 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:51.265 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:51.265 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.265 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.265 [ 0]:0x1 00:18:51.265 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.265 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.265 04:35:29 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0ec219be808e464da58bd42a0dae0d4f 00:18:51.265 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0ec219be808e464da58bd42a0dae0d4f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.265 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:51.265 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.265 04:35:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:51.265 [ 1]:0x2 00:18:51.265 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:51.265 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.265 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=310d94146eef432d9a58d66430261d54 00:18:51.265 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 310d94146eef432d9a58d66430261d54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.265 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.524 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:51.525 [ 0]:0x2 00:18:51.525 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:51.525 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.525 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=310d94146eef432d9a58d66430261d54 00:18:51.525 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 310d94146eef432d9a58d66430261d54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.525 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:51.525 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:52.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.092 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:52.092 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:52.092 04:35:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 20075e62-2d7b-4173-adc9-1004e464f55a -a 192.168.100.8 -s 4420 -i 4 00:18:52.352 04:35:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:52.352 04:35:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:52.352 04:35:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.352 04:35:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:52.352 04:35:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:52.352 04:35:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.892 04:35:33 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:54.892 [ 0]:0x1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0ec219be808e464da58bd42a0dae0d4f 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0ec219be808e464da58bd42a0dae0d4f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:54.892 [ 1]:0x2 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=310d94146eef432d9a58d66430261d54 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 310d94146eef432d9a58d66430261d54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:54.892 04:35:33 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:54.892 [ 0]:0x2 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=310d94146eef432d9a58d66430261d54 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 310d94146eef432d9a58d66430261d54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.892 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:54.893 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:54.893 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:55.151 [2024-11-17 04:35:33.812575] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:55.151 request: 00:18:55.151 { 00:18:55.151 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.151 "nsid": 2, 00:18:55.151 "host": "nqn.2016-06.io.spdk:host1", 00:18:55.151 "method": "nvmf_ns_remove_host", 00:18:55.151 "req_id": 1 00:18:55.151 } 00:18:55.151 Got JSON-RPC error response 00:18:55.151 response: 00:18:55.151 { 00:18:55.151 "code": -32602, 00:18:55.151 "message": "Invalid parameters" 00:18:55.151 } 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:55.151 04:35:33 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:55.151 [ 0]:0x2 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=310d94146eef432d9a58d66430261d54 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 310d94146eef432d9a58d66430261d54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:55.151 04:35:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:55.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=307539 00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 307539 /var/tmp/host.sock 00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 307539 ']' 00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:55.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
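Note: the namespace-visibility checks traced above reduce to two nvme-cli probes plus a pair of masking RPCs (reusing the $RPC shorthand from the sketch earlier). The helper body below is reconstructed from the xtrace output, so it approximates target/ns_masking.sh rather than quoting the verbatim script:

    ns_is_visible() {                                        # $1 is an NSID such as 0x1
        nvme list-ns /dev/nvme0 | grep "$1"                  # prints e.g. "[ 0]:0x1" when listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]   # a masked namespace reports an all-zero NGUID
    }

    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible  # hidden by default
    $RPC nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # expose NSID 1 to host1
    $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # mask it again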
00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.718 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:55.718 [2024-11-17 04:35:34.341603] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:55.718 [2024-11-17 04:35:34.341658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307539 ] 00:18:55.718 [2024-11-17 04:35:34.433905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.718 [2024-11-17 04:35:34.456186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.979 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.979 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:55.979 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:56.239 04:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:56.239 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3da9ca25-3185-4ef9-a3d2-d2db29688370 00:18:56.239 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:56.239 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3DA9CA2531854EF9A3D2D2DB29688370 -i 00:18:56.498 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6cd51ad6-f34a-4e0b-8ef0-0ad65b708e93 00:18:56.498 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:56.499 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6CD51AD6F34A4E0B8EF00AD65B708E93 -i 00:18:56.759 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:57.020 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:57.281 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:57.281 04:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:18:57.542 nvme0n1 00:18:57.542 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:57.542 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:57.802 nvme1n2 00:18:57.803 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:57.803 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:57.803 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:57.803 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:57.803 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:57.803 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:57.803 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:57.803 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:57.803 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:58.063 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3da9ca25-3185-4ef9-a3d2-d2db29688370 == \3\d\a\9\c\a\2\5\-\3\1\8\5\-\4\e\f\9\-\a\3\d\2\-\d\2\d\b\2\9\6\8\8\3\7\0 ]] 00:18:58.063 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:58.063 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:58.063 04:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:58.324 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6cd51ad6-f34a-4e0b-8ef0-0ad65b708e93 == \6\c\d\5\1\a\d\6\-\f\3\4\a\-\4\e\0\b\-\8\e\f\0\-\0\a\d\6\5\b\7\0\8\e\9\3 ]] 00:18:58.324 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3da9ca25-3185-4ef9-a3d2-d2db29688370 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3DA9CA2531854EF9A3D2D2DB29688370 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3DA9CA2531854EF9A3D2D2DB29688370 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:58.587 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3DA9CA2531854EF9A3D2D2DB29688370 00:18:58.849 [2024-11-17 04:35:37.557032] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:58.849 [2024-11-17 04:35:37.557069] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:58.849 [2024-11-17 04:35:37.557080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.849 request: 00:18:58.849 { 00:18:58.849 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.849 "namespace": { 00:18:58.849 "bdev_name": "invalid", 00:18:58.849 "nsid": 1, 00:18:58.849 "nguid": "3DA9CA2531854EF9A3D2D2DB29688370", 00:18:58.849 "no_auto_visible": false 00:18:58.849 }, 00:18:58.849 "method": "nvmf_subsystem_add_ns", 00:18:58.849 "req_id": 1 00:18:58.849 } 00:18:58.849 Got JSON-RPC error response 00:18:58.849 response: 00:18:58.849 { 00:18:58.849 "code": -32602, 00:18:58.849 "message": "Invalid parameters" 00:18:58.849 } 00:18:58.849 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:58.849 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.849 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.849 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.849 04:35:37 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3da9ca25-3185-4ef9-a3d2-d2db29688370 00:18:58.849 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:58.849 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3DA9CA2531854EF9A3D2D2DB29688370 -i 00:18:59.109 04:35:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:01.021 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:01.021 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:01.021 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:01.281 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:01.281 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 307539 00:19:01.281 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 307539 ']' 00:19:01.281 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 307539 00:19:01.281 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:01.281 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.281 04:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 307539 00:19:01.281 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:01.281 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:01.281 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 307539' 00:19:01.281 killing process with pid 307539 00:19:01.281 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 307539 00:19:01.281 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 307539 00:19:01.541 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:01.801 04:35:40 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:01.801 rmmod nvme_rdma 00:19:01.801 rmmod nvme_fabrics 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 305287 ']' 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 305287 00:19:01.801 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 305287 ']' 00:19:02.061 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 305287 00:19:02.061 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:02.061 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.061 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305287 00:19:02.061 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.061 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.061 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305287' 00:19:02.061 killing process with pid 305287 00:19:02.061 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 305287 00:19:02.061 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 305287 00:19:02.320 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:02.320 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:02.320 00:19:02.320 real 0m26.141s 00:19:02.320 user 0m32.414s 00:19:02.320 sys 0m8.043s 00:19:02.320 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.320 04:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:02.320 ************************************ 00:19:02.320 END TEST nvmf_ns_masking 00:19:02.320 ************************************ 00:19:02.320 04:35:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:02.321 04:35:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:19:02.321 04:35:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.321 04:35:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.321 04:35:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:02.321 ************************************ 00:19:02.321 START TEST nvmf_nvme_cli 00:19:02.321 ************************************ 00:19:02.321 
04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:19:02.321 * Looking for test storage... 00:19:02.321 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:02.321 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.321 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.321 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:02.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.582 --rc genhtml_branch_coverage=1 00:19:02.582 --rc genhtml_function_coverage=1 00:19:02.582 --rc genhtml_legend=1 00:19:02.582 --rc geninfo_all_blocks=1 00:19:02.582 --rc geninfo_unexecuted_blocks=1 00:19:02.582 00:19:02.582 ' 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:02.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.582 --rc genhtml_branch_coverage=1 00:19:02.582 --rc genhtml_function_coverage=1 00:19:02.582 --rc genhtml_legend=1 00:19:02.582 --rc geninfo_all_blocks=1 00:19:02.582 --rc geninfo_unexecuted_blocks=1 00:19:02.582 00:19:02.582 ' 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:02.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.582 --rc genhtml_branch_coverage=1 00:19:02.582 --rc genhtml_function_coverage=1 00:19:02.582 --rc genhtml_legend=1 00:19:02.582 --rc geninfo_all_blocks=1 00:19:02.582 --rc geninfo_unexecuted_blocks=1 00:19:02.582 00:19:02.582 ' 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:02.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.582 --rc genhtml_branch_coverage=1 00:19:02.582 --rc genhtml_function_coverage=1 00:19:02.582 --rc genhtml_legend=1 00:19:02.582 --rc geninfo_all_blocks=1 00:19:02.582 --rc geninfo_unexecuted_blocks=1 00:19:02.582 00:19:02.582 ' 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.582 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.583 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.583 04:35:41 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:02.583 04:35:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:10.717 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:10.717 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:10.717 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.717 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:10.718 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:10.718 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:10.718 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:10.718 altname enp217s0f0np0 00:19:10.718 altname ens818f0np0 00:19:10.718 inet 192.168.100.8/24 scope global mlx_0_0 00:19:10.718 valid_lft forever preferred_lft forever 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:10.718 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:10.718 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:10.718 altname enp217s0f1np1 00:19:10.718 altname ens818f1np1 00:19:10.718 inet 192.168.100.9/24 scope global mlx_0_1 00:19:10.718 valid_lft forever preferred_lft forever 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:10.718 04:35:48 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:10.718 192.168.100.9' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:10.718 192.168.100.9' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:10.718 192.168.100.9' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:19:10.718 04:35:48 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:10.718 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=311965 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 311965 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 311965 ']' 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 [2024-11-17 04:35:48.412832] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:19:10.719 [2024-11-17 04:35:48.412885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.719 [2024-11-17 04:35:48.505500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:10.719 [2024-11-17 04:35:48.530120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.719 [2024-11-17 04:35:48.530159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:10.719 [2024-11-17 04:35:48.530169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.719 [2024-11-17 04:35:48.530177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.719 [2024-11-17 04:35:48.530184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.719 [2024-11-17 04:35:48.531754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.719 [2024-11-17 04:35:48.531875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.719 [2024-11-17 04:35:48.531983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.719 [2024-11-17 04:35:48.531984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 [2024-11-17 04:35:48.693023] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc15c50/0xc1a100) succeed. 00:19:10.719 [2024-11-17 04:35:48.702096] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc17290/0xc5b7a0) succeed. 
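For anyone replaying this sequence by hand: rpc_cmd in these traces is the test harness's wrapper around SPDK's scripts/rpc.py, driving the nvmf_tgt started above over its UNIX socket. Below is a minimal sketch of the same target setup using only values that appear verbatim in this log (transport options, bdev geometry, subsystem NQN, serial, listener address); the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions based on a stock SPDK checkout, and error handling is omitted.

# RDMA transport with 1024 shared buffers and an 8192-byte in-capsule data size
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# Two 64 MB, 512-byte-block RAM-backed bdevs to serve as namespaces
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
# Subsystem whose serial the test later greps for in lsblk output
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
# Listen on the first RDMA interface detected above (mlx_0_0, 192.168.100.8)
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
# Plus a discovery listener, so nvme discover reports both log entries
scripts/rpc.py nvmf_subsystem_add_listener discovery \
    -t rdma -a 192.168.100.8 -s 4420

The initiator side then mirrors this with nvme-cli, as traced further down: nvme discover and nvme connect -i 15 against 192.168.100.8:4420, a wait for two namespaces (/dev/nvme0n1, /dev/nvme0n2) whose serial matches SPDKISFASTANDAWESOME, and finally nvme disconnect -n nqn.2016-06.io.spdk:cnode1.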
00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 Malloc0 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 Malloc1 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 [2024-11-17 04:35:48.913594] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:10.719 04:35:48 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 04:35:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:19:10.719 00:19:10.719 Discovery Log Number of Records 2, Generation counter 2 00:19:10.719 =====Discovery Log Entry 0====== 00:19:10.719 trtype: rdma 00:19:10.719 adrfam: ipv4 00:19:10.719 subtype: current discovery subsystem 00:19:10.719 treq: not required 00:19:10.719 portid: 0 00:19:10.719 trsvcid: 4420 00:19:10.719 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:10.719 traddr: 192.168.100.8 00:19:10.719 eflags: explicit discovery connections, duplicate discovery information 00:19:10.719 rdma_prtype: not specified 00:19:10.719 rdma_qptype: connected 00:19:10.719 rdma_cms: rdma-cm 00:19:10.719 rdma_pkey: 0x0000 00:19:10.719 =====Discovery Log Entry 1====== 00:19:10.719 trtype: rdma 00:19:10.719 adrfam: ipv4 00:19:10.719 subtype: nvme subsystem 00:19:10.719 treq: not required 00:19:10.719 portid: 0 00:19:10.719 trsvcid: 4420 00:19:10.719 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:10.719 traddr: 192.168.100.8 00:19:10.719 eflags: none 00:19:10.719 rdma_prtype: not specified 00:19:10.719 rdma_qptype: connected 00:19:10.719 rdma_cms: rdma-cm 00:19:10.719 rdma_pkey: 0x0000 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:10.719 04:35:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:11.291 04:35:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:11.291 04:35:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:11.291 04:35:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:11.291 04:35:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:11.291 04:35:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:11.291 04:35:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:13.203 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:13.203 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:13.203 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:13.463 /dev/nvme0n2 ]] 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:13.463 04:35:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:14.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.403 
04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:14.403 rmmod nvme_rdma 00:19:14.403 rmmod nvme_fabrics 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 311965 ']' 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 311965 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 311965 ']' 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 311965 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:14.403 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.404 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 311965 00:19:14.663 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.664 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.664 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 311965' 00:19:14.664 killing process with pid 311965 00:19:14.664 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 311965 00:19:14.664 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 311965 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:14.924 00:19:14.924 real 0m12.490s 00:19:14.924 user 0m21.614s 00:19:14.924 sys 0m6.219s 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:14.924 ************************************ 00:19:14.924 END TEST nvmf_nvme_cli 00:19:14.924 ************************************ 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:14.924 ************************************ 00:19:14.924 START TEST nvmf_auth_target 00:19:14.924 ************************************ 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:14.924 * Looking for test storage... 00:19:14.924 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:14.924 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:15.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.186 --rc genhtml_branch_coverage=1 00:19:15.186 --rc genhtml_function_coverage=1 00:19:15.186 --rc genhtml_legend=1 00:19:15.186 --rc geninfo_all_blocks=1 00:19:15.186 --rc geninfo_unexecuted_blocks=1 00:19:15.186 00:19:15.186 ' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:15.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.186 --rc genhtml_branch_coverage=1 00:19:15.186 --rc genhtml_function_coverage=1 00:19:15.186 --rc genhtml_legend=1 00:19:15.186 --rc geninfo_all_blocks=1 00:19:15.186 --rc geninfo_unexecuted_blocks=1 00:19:15.186 00:19:15.186 ' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:15.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.186 --rc genhtml_branch_coverage=1 00:19:15.186 --rc genhtml_function_coverage=1 00:19:15.186 --rc genhtml_legend=1 00:19:15.186 --rc geninfo_all_blocks=1 00:19:15.186 --rc geninfo_unexecuted_blocks=1 00:19:15.186 00:19:15.186 ' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:15.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.186 --rc genhtml_branch_coverage=1 00:19:15.186 --rc genhtml_function_coverage=1 00:19:15.186 --rc genhtml_legend=1 00:19:15.186 --rc geninfo_all_blocks=1 00:19:15.186 --rc geninfo_unexecuted_blocks=1 00:19:15.186 00:19:15.186 ' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.186 04:35:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.186 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.186 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:15.187 04:35:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:23.341 04:36:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:23.341 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.341 04:36:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:23.341 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:23.341 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:23.342 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:23.342 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.342 04:36:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:23.342 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.342 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:23.342 altname enp217s0f0np0 00:19:23.342 altname ens818f0np0 00:19:23.342 inet 192.168.100.8/24 scope global mlx_0_0 00:19:23.342 valid_lft forever preferred_lft forever 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:23.342 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.342 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:23.342 altname enp217s0f1np1 00:19:23.342 altname ens818f1np1 00:19:23.342 inet 192.168.100.9/24 scope global mlx_0_1 00:19:23.342 valid_lft forever preferred_lft forever 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.342 04:36:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.342 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.342 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.342 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:23.342 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:23.342 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:23.342 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:23.343 04:36:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:23.343 192.168.100.9' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:23.343 192.168.100.9' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:23.343 192.168.100.9' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=316227 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 316227 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 316227 ']' 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
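With both addresses collected, common.sh splits the newline-separated RDMA_IP_LIST into first and second target IPs with head and tail, as traced at nvmf/common.sh@485-486. A condensed sketch of that split, with the addresses copied from this run:

  # Sketch of the IP-list split traced above.
  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9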
00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=316390 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c52eaeeaf4d910c4e7b9c8f2c14b3d302d91356653fd5d7c 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kIG 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c52eaeeaf4d910c4e7b9c8f2c14b3d302d91356653fd5d7c 0 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c52eaeeaf4d910c4e7b9c8f2c14b3d302d91356653fd5d7c 0 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c52eaeeaf4d910c4e7b9c8f2c14b3d302d91356653fd5d7c 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 
-- # python - 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kIG 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kIG 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.kIG 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eb4e93388043f685dab41dd9d624e311cfedc346b6b1c05ee8d533e2ded0f433 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Zyo 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eb4e93388043f685dab41dd9d624e311cfedc346b6b1c05ee8d533e2ded0f433 3 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eb4e93388043f685dab41dd9d624e311cfedc346b6b1c05ee8d533e2ded0f433 3 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eb4e93388043f685dab41dd9d624e311cfedc346b6b1c05ee8d533e2ded0f433 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Zyo 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Zyo 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Zyo 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=sha256 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4ae5e088875d75c72ebe22a2b0dbc85f 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0EY 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4ae5e088875d75c72ebe22a2b0dbc85f 1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4ae5e088875d75c72ebe22a2b0dbc85f 1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.343 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4ae5e088875d75c72ebe22a2b0dbc85f 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0EY 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0EY 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.0EY 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=213730bb913c30865565d53147d331fc51905def09a56dbe 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BXH 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 213730bb913c30865565d53147d331fc51905def09a56dbe 2 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 213730bb913c30865565d53147d331fc51905def09a56dbe 2 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix 
key digest 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=213730bb913c30865565d53147d331fc51905def09a56dbe 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BXH 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BXH 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.BXH 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c76f62e25f9aff082d4143a72194b0a74aa5ad10906ebb51 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZaN 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c76f62e25f9aff082d4143a72194b0a74aa5ad10906ebb51 2 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c76f62e25f9aff082d4143a72194b0a74aa5ad10906ebb51 2 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c76f62e25f9aff082d4143a72194b0a74aa5ad10906ebb51 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZaN 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZaN 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ZaN 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:23.344 04:36:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bc3108bf7b1fc92ac352de8a7d653908 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7w4 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bc3108bf7b1fc92ac352de8a7d653908 1 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bc3108bf7b1fc92ac352de8a7d653908 1 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bc3108bf7b1fc92ac352de8a7d653908 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7w4 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7w4 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.7w4 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=621619ab3f76d9f1d7d4c8acc453ca95cb9683803395a80052157f12fe34af75 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # 
file=/tmp/spdk.key-sha512.0iP 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 621619ab3f76d9f1d7d4c8acc453ca95cb9683803395a80052157f12fe34af75 3 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 621619ab3f76d9f1d7d4c8acc453ca95cb9683803395a80052157f12fe34af75 3 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=621619ab3f76d9f1d7d4c8acc453ca95cb9683803395a80052157f12fe34af75 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0iP 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0iP 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.0iP 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 316227 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 316227 ']' 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.344 04:36:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.344 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.344 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:23.344 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 316390 /var/tmp/host.sock 00:19:23.344 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 316390 ']' 00:19:23.344 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:23.344 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.344 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:23.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
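Each key file registered below was produced by gen_dhchap_key, whose traced steps are: draw random bytes from /dev/urandom with xxd, create a mktemp file, wrap the hex key into a DHHC-1 blob via an inline python helper, and chmod the file to 0600. A reduced sketch of the shell side only; the python wrapping (which produces the actual "DHHC-1:<digest>:..." contents) is elided here because the trace shows only its invocation:

  # Reduced sketch of gen_dhchap_key as traced above.
  gen_dhchap_key_sketch() {
      local digest=$1 len=$2           # e.g. "null" 48, "sha512" 64
      local key file
      # len hex characters require len/2 random bytes (48 -> 24 bytes).
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      file=$(mktemp -t "spdk.key-$digest.XXX")
      printf '%s\n' "$key" > "$file"   # the real helper writes the DHHC-1 blob instead
      chmod 0600 "$file"               # DH-HMAC-CHAP keys must not be world-readable
      echo "$file"
  }
  gen_dhchap_key_sketch null 48        # e.g. /tmp/spdk.key-null.kIG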
00:19:23.345 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.345 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kIG 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kIG 00:19:23.605 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kIG 00:19:23.865 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Zyo ]] 00:19:23.865 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zyo 00:19:23.865 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.865 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.865 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.865 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zyo 00:19:23.865 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zyo 00:19:24.126 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:24.126 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0EY 00:19:24.126 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.126 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.126 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.126 
04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0EY 00:19:24.126 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0EY 00:19:24.386 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.BXH ]] 00:19:24.387 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BXH 00:19:24.387 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.387 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.387 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.387 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BXH 00:19:24.387 04:36:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BXH 00:19:24.387 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:24.387 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZaN 00:19:24.387 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.387 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.387 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.387 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZaN 00:19:24.387 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZaN 00:19:24.647 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.7w4 ]] 00:19:24.648 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7w4 00:19:24.648 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.648 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.648 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.648 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7w4 00:19:24.648 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7w4 00:19:24.908 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:24.908 04:36:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0iP 00:19:24.908 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.908 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.908 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.908 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.0iP 00:19:24.908 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.0iP 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.169 04:36:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.429 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.430 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.430 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:25.430 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.691 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.691 { 00:19:25.691 "cntlid": 1, 00:19:25.691 "qid": 0, 00:19:25.691 "state": "enabled", 00:19:25.691 "thread": "nvmf_tgt_poll_group_000", 00:19:25.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:25.691 "listen_address": { 00:19:25.691 "trtype": "RDMA", 00:19:25.691 "adrfam": "IPv4", 00:19:25.691 "traddr": "192.168.100.8", 00:19:25.691 "trsvcid": "4420" 00:19:25.691 }, 00:19:25.691 "peer_address": { 00:19:25.691 "trtype": "RDMA", 00:19:25.691 "adrfam": "IPv4", 00:19:25.691 "traddr": "192.168.100.8", 00:19:25.691 "trsvcid": "43116" 00:19:25.691 }, 00:19:25.691 "auth": { 00:19:25.691 "state": "completed", 00:19:25.691 "digest": "sha256", 00:19:25.691 "dhgroup": "null" 00:19:25.691 } 00:19:25.691 } 00:19:25.691 ]' 00:19:25.691 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.951 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.951 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.951 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:25.951 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.951 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.951 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.951 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.211 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:26.211 04:36:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:29.507 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.507 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:29.508 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.508 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.508 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.508 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.508 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.508 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
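The per-key authentication pass traced above condenses to a short RPC sequence. Below is a minimal sketch assembled only from commands visible in this log; it assumes a running SPDK nvmf target on the default RPC socket, the host bdev_nvme service on /var/tmp/host.sock, the key files created earlier in this run, and SPDK_DIR pointing at the SPDK checkout (here /var/jenkins/workspace/nvmf-phy-autotest/spdk). KEY and CKEY in the commented nvme-cli variant are placeholders for the DHHC-1 secrets shown inline in the trace.

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate pass, condensed from the
# commands recorded above. Assumed (not shown in this excerpt): a running
# nvmf target on the default RPC socket, the host service on
# /var/tmp/host.sock, key files created earlier in the run, and SPDK_DIR
# pointing at the SPDK checkout.
rpc="$SPDK_DIR/scripts/rpc.py"
hostrpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/host.sock"
nqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

# Register the DH-HMAC-CHAP key and controller key with both keyrings.
$rpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0EY
$hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0EY
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BXH
$hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BXH

# Pin the initiator to the digest/dhgroup combination under test.
$hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Authorize the host on the subsystem, then attach an authenticated controller.
$rpc nvmf_subsystem_add_host "$nqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q "$hostnqn" -n "$nqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the qpair finished DH-HMAC-CHAP; expect "completed".
$rpc nvmf_subsystem_get_qpairs "$nqn" | jq -r '.[0].auth.state'

# Tear down before the next digest/dhgroup/key combination.
$hostrpc bdev_nvme_detach_controller nvme0
# Kernel-initiator variant exercised by nvme_connect (secrets elided;
# KEY/CKEY are placeholders for the DHHC-1 values shown in the trace):
#   nvme connect -t rdma -a 192.168.100.8 -n "$nqn" -i 1 -q "$hostnqn" \
#       --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
#       --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
#   nvme disconnect -n "$nqn"
$rpc nvmf_subsystem_remove_host "$nqn" "$hostnqn"

In the run above, this same attach/verify/detach pattern repeats for each key (key0 through key3) and for each DH group in turn (null, ffdhe2048, ffdhe3072), which is why the sequence recurs through the remainder of the trace.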
00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.769 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.030 00:19:30.030 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.030 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.030 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.291 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.291 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.291 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.291 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.291 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.291 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.291 { 00:19:30.291 "cntlid": 3, 00:19:30.291 "qid": 0, 00:19:30.291 "state": "enabled", 00:19:30.291 "thread": "nvmf_tgt_poll_group_000", 00:19:30.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:30.291 "listen_address": { 00:19:30.291 "trtype": "RDMA", 00:19:30.291 "adrfam": "IPv4", 00:19:30.291 "traddr": "192.168.100.8", 00:19:30.291 "trsvcid": "4420" 00:19:30.291 }, 00:19:30.291 "peer_address": { 00:19:30.291 "trtype": "RDMA", 00:19:30.291 "adrfam": "IPv4", 00:19:30.291 "traddr": "192.168.100.8", 00:19:30.291 "trsvcid": "49483" 00:19:30.291 }, 00:19:30.291 "auth": { 00:19:30.291 "state": "completed", 00:19:30.291 "digest": "sha256", 00:19:30.291 "dhgroup": "null" 00:19:30.291 } 00:19:30.291 } 00:19:30.291 ]' 00:19:30.291 04:36:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.291 04:36:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.291 04:36:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.291 04:36:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:30.291 04:36:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.291 04:36:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.291 04:36:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.291 04:36:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.564 04:36:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:19:30.564 04:36:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:19:31.139 04:36:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.400 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.660 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.660 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.920 { 00:19:31.920 "cntlid": 5, 00:19:31.920 "qid": 0, 00:19:31.920 "state": "enabled", 00:19:31.920 "thread": "nvmf_tgt_poll_group_000", 00:19:31.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:31.920 "listen_address": { 00:19:31.920 "trtype": "RDMA", 00:19:31.920 "adrfam": "IPv4", 00:19:31.920 "traddr": "192.168.100.8", 00:19:31.920 "trsvcid": "4420" 00:19:31.920 }, 00:19:31.920 "peer_address": { 00:19:31.920 "trtype": "RDMA", 00:19:31.920 "adrfam": "IPv4", 00:19:31.920 "traddr": "192.168.100.8", 00:19:31.920 "trsvcid": "41401" 00:19:31.920 }, 00:19:31.920 "auth": { 00:19:31.920 "state": "completed", 00:19:31.920 "digest": "sha256", 00:19:31.920 "dhgroup": "null" 00:19:31.920 } 00:19:31.920 } 00:19:31.920 ]' 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.920 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.180 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:32.180 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.180 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.180 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.180 04:36:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.440 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:19:32.440 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:19:33.011 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.011 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:33.011 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.011 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.011 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.011 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.011 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:33.011 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:33.271 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:33.271 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.271 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.271 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:33.271 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:33.271 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.271 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:33.272 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:33.272 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.272 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.272 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:33.272 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.272 04:36:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.531 00:19:33.531 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.531 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.531 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.792 { 00:19:33.792 "cntlid": 7, 00:19:33.792 "qid": 0, 00:19:33.792 "state": "enabled", 00:19:33.792 "thread": "nvmf_tgt_poll_group_000", 00:19:33.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:33.792 "listen_address": { 00:19:33.792 "trtype": "RDMA", 00:19:33.792 "adrfam": "IPv4", 00:19:33.792 "traddr": "192.168.100.8", 00:19:33.792 "trsvcid": "4420" 00:19:33.792 }, 00:19:33.792 "peer_address": { 00:19:33.792 "trtype": "RDMA", 00:19:33.792 "adrfam": "IPv4", 00:19:33.792 "traddr": "192.168.100.8", 00:19:33.792 "trsvcid": "43576" 00:19:33.792 }, 00:19:33.792 "auth": { 00:19:33.792 "state": "completed", 00:19:33.792 "digest": "sha256", 00:19:33.792 "dhgroup": "null" 00:19:33.792 } 00:19:33.792 } 00:19:33.792 ]' 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.792 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.052 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:19:34.052 04:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:19:34.622 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.882 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.143 00:19:35.143 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.143 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.143 04:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.403 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.403 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.403 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.403 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.403 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.403 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.403 { 00:19:35.403 "cntlid": 9, 00:19:35.403 "qid": 0, 00:19:35.403 "state": "enabled", 00:19:35.404 "thread": "nvmf_tgt_poll_group_000", 00:19:35.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:35.404 "listen_address": { 00:19:35.404 "trtype": "RDMA", 00:19:35.404 "adrfam": "IPv4", 00:19:35.404 "traddr": "192.168.100.8", 00:19:35.404 "trsvcid": "4420" 00:19:35.404 }, 00:19:35.404 "peer_address": { 00:19:35.404 "trtype": "RDMA", 00:19:35.404 "adrfam": "IPv4", 00:19:35.404 "traddr": "192.168.100.8", 00:19:35.404 "trsvcid": "54020" 00:19:35.404 }, 00:19:35.404 "auth": { 00:19:35.404 "state": "completed", 00:19:35.404 "digest": "sha256", 00:19:35.404 "dhgroup": "ffdhe2048" 00:19:35.404 } 00:19:35.404 } 00:19:35.404 ]' 00:19:35.404 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.404 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.404 
04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.664 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.664 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.664 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.664 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.664 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.001 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:36.001 04:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:36.615 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.615 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:36.615 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.615 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.615 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.615 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.615 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.615 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:36.892 04:36:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.892 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.892 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.195 { 00:19:37.195 "cntlid": 11, 00:19:37.195 "qid": 0, 00:19:37.195 "state": "enabled", 00:19:37.195 "thread": "nvmf_tgt_poll_group_000", 00:19:37.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:37.195 "listen_address": { 00:19:37.195 "trtype": "RDMA", 00:19:37.195 "adrfam": "IPv4", 00:19:37.195 "traddr": "192.168.100.8", 00:19:37.195 "trsvcid": "4420" 00:19:37.195 }, 00:19:37.195 "peer_address": { 00:19:37.195 "trtype": "RDMA", 00:19:37.195 "adrfam": "IPv4", 00:19:37.195 "traddr": "192.168.100.8", 00:19:37.195 "trsvcid": "33530" 00:19:37.195 }, 00:19:37.195 "auth": { 00:19:37.195 "state": 
"completed", 00:19:37.195 "digest": "sha256", 00:19:37.195 "dhgroup": "ffdhe2048" 00:19:37.195 } 00:19:37.195 } 00:19:37.195 ]' 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.195 04:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.456 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.456 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.456 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.456 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.456 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.456 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:19:37.456 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:19:38.396 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.396 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:38.396 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.396 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.396 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.396 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.396 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.396 04:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.396 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.656 00:19:38.656 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.656 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.656 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.916 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.916 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.916 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.916 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.916 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.916 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.916 { 00:19:38.916 "cntlid": 13, 00:19:38.916 "qid": 0, 00:19:38.916 "state": "enabled", 00:19:38.916 "thread": "nvmf_tgt_poll_group_000", 00:19:38.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:38.916 "listen_address": { 00:19:38.916 "trtype": "RDMA", 00:19:38.916 "adrfam": "IPv4", 00:19:38.916 "traddr": "192.168.100.8", 00:19:38.916 "trsvcid": "4420" 
00:19:38.916 }, 00:19:38.916 "peer_address": { 00:19:38.916 "trtype": "RDMA", 00:19:38.916 "adrfam": "IPv4", 00:19:38.916 "traddr": "192.168.100.8", 00:19:38.916 "trsvcid": "40816" 00:19:38.916 }, 00:19:38.916 "auth": { 00:19:38.916 "state": "completed", 00:19:38.916 "digest": "sha256", 00:19:38.916 "dhgroup": "ffdhe2048" 00:19:38.916 } 00:19:38.916 } 00:19:38.916 ]' 00:19:38.916 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.916 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.916 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.176 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.176 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.176 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.176 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.176 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.176 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:19:39.176 04:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.116 
04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.116 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:40.117 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.117 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.117 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.117 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:40.117 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.117 04:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.376 00:19:40.376 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.376 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.376 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.636 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.636 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.636 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.636 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.636 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.636 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.636 { 00:19:40.636 "cntlid": 15, 00:19:40.636 "qid": 0, 00:19:40.636 "state": "enabled", 00:19:40.636 "thread": "nvmf_tgt_poll_group_000", 00:19:40.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:40.636 
"listen_address": { 00:19:40.636 "trtype": "RDMA", 00:19:40.636 "adrfam": "IPv4", 00:19:40.636 "traddr": "192.168.100.8", 00:19:40.636 "trsvcid": "4420" 00:19:40.636 }, 00:19:40.636 "peer_address": { 00:19:40.636 "trtype": "RDMA", 00:19:40.636 "adrfam": "IPv4", 00:19:40.636 "traddr": "192.168.100.8", 00:19:40.636 "trsvcid": "60410" 00:19:40.636 }, 00:19:40.636 "auth": { 00:19:40.636 "state": "completed", 00:19:40.636 "digest": "sha256", 00:19:40.636 "dhgroup": "ffdhe2048" 00:19:40.636 } 00:19:40.636 } 00:19:40.636 ]' 00:19:40.636 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.636 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.636 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.895 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.895 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.895 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.895 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.895 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.155 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:19:41.155 04:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:19:41.726 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.726 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:41.726 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.726 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.726 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.726 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.726 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.726 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.726 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.996 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:41.996 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.996 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.996 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:41.997 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:41.997 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.997 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.997 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.997 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.997 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.997 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.997 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.997 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.261 00:19:42.261 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.261 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.261 04:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:19:42.522 { 00:19:42.522 "cntlid": 17, 00:19:42.522 "qid": 0, 00:19:42.522 "state": "enabled", 00:19:42.522 "thread": "nvmf_tgt_poll_group_000", 00:19:42.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:42.522 "listen_address": { 00:19:42.522 "trtype": "RDMA", 00:19:42.522 "adrfam": "IPv4", 00:19:42.522 "traddr": "192.168.100.8", 00:19:42.522 "trsvcid": "4420" 00:19:42.522 }, 00:19:42.522 "peer_address": { 00:19:42.522 "trtype": "RDMA", 00:19:42.522 "adrfam": "IPv4", 00:19:42.522 "traddr": "192.168.100.8", 00:19:42.522 "trsvcid": "42481" 00:19:42.522 }, 00:19:42.522 "auth": { 00:19:42.522 "state": "completed", 00:19:42.522 "digest": "sha256", 00:19:42.522 "dhgroup": "ffdhe3072" 00:19:42.522 } 00:19:42.522 } 00:19:42.522 ]' 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.522 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.795 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:42.796 04:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:43.368 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.628 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:43.628 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.628 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.628 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.628 04:36:22 
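Each connect_authenticate pass in this trace follows the same three-step host/target pattern: pin the initiator to a single digest and DH group, register the host NQN on the subsystem with the key pair under test, then attach a controller through the host RPC socket so the DH-HMAC-CHAP handshake actually runs. A condensed sketch of that sequence, with the socket path, NQNs, transport address and flags taken verbatim from the trace; only the long rpc.py path is shortened into a variable, and routing the target-side call through the default RPC socket is an assumption (rpc_cmd in the trace does not pass -s):

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  host_nqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  subsys=nqn.2024-03.io.spdk:cnode0

  # Host side: accept only sha256 + ffdhe3072 for DH-HMAC-CHAP
  $rpc_py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Target side (default RPC socket): allow the host with key1/ckey1
  $rpc_py nvmf_subsystem_add_host "$subsys" "$host_nqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller; this is what triggers the handshake
  $rpc_py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$host_nqn" -n "$subsys" -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1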
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.628 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.629 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.889 00:19:43.889 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.889 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.889 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.149 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.149 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.149 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.149 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.149 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.149 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.149 { 00:19:44.149 "cntlid": 19, 00:19:44.149 "qid": 0, 00:19:44.149 "state": "enabled", 00:19:44.149 "thread": "nvmf_tgt_poll_group_000", 00:19:44.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:44.149 "listen_address": { 00:19:44.149 "trtype": "RDMA", 00:19:44.149 "adrfam": "IPv4", 00:19:44.149 "traddr": "192.168.100.8", 00:19:44.149 "trsvcid": "4420" 00:19:44.149 }, 00:19:44.149 "peer_address": { 00:19:44.149 "trtype": "RDMA", 00:19:44.149 "adrfam": "IPv4", 00:19:44.149 "traddr": "192.168.100.8", 00:19:44.149 "trsvcid": "55491" 00:19:44.149 }, 00:19:44.149 "auth": { 00:19:44.149 "state": "completed", 00:19:44.149 "digest": "sha256", 00:19:44.149 "dhgroup": "ffdhe3072" 00:19:44.149 } 00:19:44.149 } 00:19:44.149 ]' 00:19:44.149 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.149 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.149 04:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.409 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.409 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.409 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.409 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.409 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.670 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:19:44.670 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:19:45.240 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.240 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:45.240 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:45.240 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.240 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.240 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.240 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.240 04:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.500 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.760 00:19:45.760 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.760 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.760 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- 
# [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.021 { 00:19:46.021 "cntlid": 21, 00:19:46.021 "qid": 0, 00:19:46.021 "state": "enabled", 00:19:46.021 "thread": "nvmf_tgt_poll_group_000", 00:19:46.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:46.021 "listen_address": { 00:19:46.021 "trtype": "RDMA", 00:19:46.021 "adrfam": "IPv4", 00:19:46.021 "traddr": "192.168.100.8", 00:19:46.021 "trsvcid": "4420" 00:19:46.021 }, 00:19:46.021 "peer_address": { 00:19:46.021 "trtype": "RDMA", 00:19:46.021 "adrfam": "IPv4", 00:19:46.021 "traddr": "192.168.100.8", 00:19:46.021 "trsvcid": "37817" 00:19:46.021 }, 00:19:46.021 "auth": { 00:19:46.021 "state": "completed", 00:19:46.021 "digest": "sha256", 00:19:46.021 "dhgroup": "ffdhe3072" 00:19:46.021 } 00:19:46.021 } 00:19:46.021 ]' 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.021 04:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.280 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:19:46.280 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:19:46.849 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.110 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:47.110 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.110 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.110 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.110 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.110 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.110 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.370 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:47.370 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.370 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.370 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:47.370 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.371 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.371 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:47.371 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.371 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.371 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.371 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.371 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.371 04:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.371 00:19:47.631 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.631 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.631 04:36:26 
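Note that a successful attach alone is never treated as proof of authentication: after every connect the script cross-checks both ends, asking the host for its controller list and the target for the admin qpair's negotiated auth parameters. The jq filters below are copied from the auth.sh@73-77 frames; the variable plumbing is illustrative, with rpc_py as in the earlier sketch:

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # The controller must exist on the host side
  [[ $($rpc_py -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]

  # The target must report completed auth with the expected parameters
  qpairs=$($rpc_py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]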
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.631 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.631 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.631 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.631 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.631 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.631 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.631 { 00:19:47.631 "cntlid": 23, 00:19:47.631 "qid": 0, 00:19:47.631 "state": "enabled", 00:19:47.631 "thread": "nvmf_tgt_poll_group_000", 00:19:47.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:47.631 "listen_address": { 00:19:47.631 "trtype": "RDMA", 00:19:47.631 "adrfam": "IPv4", 00:19:47.631 "traddr": "192.168.100.8", 00:19:47.631 "trsvcid": "4420" 00:19:47.631 }, 00:19:47.631 "peer_address": { 00:19:47.631 "trtype": "RDMA", 00:19:47.631 "adrfam": "IPv4", 00:19:47.631 "traddr": "192.168.100.8", 00:19:47.631 "trsvcid": "54565" 00:19:47.631 }, 00:19:47.631 "auth": { 00:19:47.631 "state": "completed", 00:19:47.631 "digest": "sha256", 00:19:47.631 "dhgroup": "ffdhe3072" 00:19:47.631 } 00:19:47.631 } 00:19:47.631 ]' 00:19:47.631 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.892 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.892 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.892 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.892 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.892 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.892 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.892 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.152 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:19:48.152 04:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:19:48.722 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.722 04:36:27 
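Teardown between iterations is symmetric with setup, so each key starts from a clean slate: the host-side bdev controller is detached, the kernel initiator is disconnected once it has re-run the handshake, and the host entry is stripped from the subsystem. In outline, using the same rpc_py and NQN shorthands as above:

  $rpc_py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # after the nvme-cli connect check
  $rpc_py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e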
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:48.722 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.722 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.722 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.722 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.722 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.722 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.722 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.982 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.983 04:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.243 00:19:49.243 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:19:49.243 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.243 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.503 { 00:19:49.503 "cntlid": 25, 00:19:49.503 "qid": 0, 00:19:49.503 "state": "enabled", 00:19:49.503 "thread": "nvmf_tgt_poll_group_000", 00:19:49.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:49.503 "listen_address": { 00:19:49.503 "trtype": "RDMA", 00:19:49.503 "adrfam": "IPv4", 00:19:49.503 "traddr": "192.168.100.8", 00:19:49.503 "trsvcid": "4420" 00:19:49.503 }, 00:19:49.503 "peer_address": { 00:19:49.503 "trtype": "RDMA", 00:19:49.503 "adrfam": "IPv4", 00:19:49.503 "traddr": "192.168.100.8", 00:19:49.503 "trsvcid": "48984" 00:19:49.503 }, 00:19:49.503 "auth": { 00:19:49.503 "state": "completed", 00:19:49.503 "digest": "sha256", 00:19:49.503 "dhgroup": "ffdhe4096" 00:19:49.503 } 00:19:49.503 } 00:19:49.503 ]' 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:49.503 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.763 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.763 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.763 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.763 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:49.763 04:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.704 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.963 00:19:50.963 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.963 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.963 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.223 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.223 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.223 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.223 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.223 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.223 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.223 { 00:19:51.223 "cntlid": 27, 00:19:51.223 "qid": 0, 00:19:51.223 "state": "enabled", 00:19:51.223 "thread": "nvmf_tgt_poll_group_000", 00:19:51.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:51.223 "listen_address": { 00:19:51.223 "trtype": "RDMA", 00:19:51.223 "adrfam": "IPv4", 00:19:51.223 "traddr": "192.168.100.8", 00:19:51.223 "trsvcid": "4420" 00:19:51.223 }, 00:19:51.223 "peer_address": { 00:19:51.223 "trtype": "RDMA", 00:19:51.223 "adrfam": "IPv4", 00:19:51.223 "traddr": "192.168.100.8", 00:19:51.223 "trsvcid": "56473" 00:19:51.223 }, 00:19:51.223 "auth": { 00:19:51.223 "state": "completed", 00:19:51.223 "digest": "sha256", 00:19:51.223 "dhgroup": "ffdhe4096" 00:19:51.223 } 00:19:51.223 } 00:19:51.223 ]' 00:19:51.223 04:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.223 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.223 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.481 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.481 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.481 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.481 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.481 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.740 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 
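Beyond the userspace bdev path, each key is also pushed through the kernel initiator: nvme-cli receives both secrets in the standard DHHC-1:<hmac-id>:<base64 key>: wire format, and the later "disconnected 1 controller(s)" line is what confirms the handshake succeeded. The pattern, with the secrets elided here (-i and -l are nvme-cli's nr-io-queues and ctrl-loss-tmo short options):

  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
      --dhchap-secret 'DHHC-1:01:<host key, elided>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller key, elided>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0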
00:19:51.740 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:19:52.309 04:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.309 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:52.309 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.309 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.310 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.310 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.310 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.310 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.569 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:52.569 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.569 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.569 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:52.569 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.569 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.570 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.570 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.570 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.570 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.570 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.570 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.570 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.830 00:19:52.830 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.830 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.830 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.090 { 00:19:53.090 "cntlid": 29, 00:19:53.090 "qid": 0, 00:19:53.090 "state": "enabled", 00:19:53.090 "thread": "nvmf_tgt_poll_group_000", 00:19:53.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:53.090 "listen_address": { 00:19:53.090 "trtype": "RDMA", 00:19:53.090 "adrfam": "IPv4", 00:19:53.090 "traddr": "192.168.100.8", 00:19:53.090 "trsvcid": "4420" 00:19:53.090 }, 00:19:53.090 "peer_address": { 00:19:53.090 "trtype": "RDMA", 00:19:53.090 "adrfam": "IPv4", 00:19:53.090 "traddr": "192.168.100.8", 00:19:53.090 "trsvcid": "60524" 00:19:53.090 }, 00:19:53.090 "auth": { 00:19:53.090 "state": "completed", 00:19:53.090 "digest": "sha256", 00:19:53.090 "dhgroup": "ffdhe4096" 00:19:53.090 } 00:19:53.090 } 00:19:53.090 ]' 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.090 04:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.350 04:36:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:19:53.350 04:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:19:53.920 04:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.180 04:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:54.180 04:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.180 04:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.180 04:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.180 04:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.180 04:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.180 04:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.180 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:54.180 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.180 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.180 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:54.180 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.180 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.180 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:54.180 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.180 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.440 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.440 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.440 04:36:33 
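Zooming out, the auth.sh@119-123 frames recurring through this span show the shape of the whole sha256 pass: an outer loop over the DH groups (ffdhe2048 through ffdhe6144 here) and an inner loop over the key indices, with the host's allowed algorithms re-pinned before every connect. The ${ckeys[$3]:+...} expansion at auth.sh@68 is also why key3 is registered without a controller key: no ckey3 exists, so the --dhchap-ctrlr-key argument silently drops out. Schematically, with loop bodies abridged from the script (hostrpc is the script's wrapper around rpc.py -s /var/tmp/host.sock, per auth.sh@31):

  for dhgroup in "${dhgroups[@]}"; do      # auth.sh@119
      for keyid in "${!keys[@]}"; do       # auth.sh@120
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done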
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.440 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.701 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.701 { 00:19:54.701 "cntlid": 31, 00:19:54.701 "qid": 0, 00:19:54.701 "state": "enabled", 00:19:54.701 "thread": "nvmf_tgt_poll_group_000", 00:19:54.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:54.701 "listen_address": { 00:19:54.701 "trtype": "RDMA", 00:19:54.701 "adrfam": "IPv4", 00:19:54.701 "traddr": "192.168.100.8", 00:19:54.701 "trsvcid": "4420" 00:19:54.701 }, 00:19:54.701 "peer_address": { 00:19:54.701 "trtype": "RDMA", 00:19:54.701 "adrfam": "IPv4", 00:19:54.701 "traddr": "192.168.100.8", 00:19:54.701 "trsvcid": "58788" 00:19:54.701 }, 00:19:54.701 "auth": { 00:19:54.701 "state": "completed", 00:19:54.701 "digest": "sha256", 00:19:54.701 "dhgroup": "ffdhe4096" 00:19:54.701 } 00:19:54.701 } 00:19:54.701 ]' 00:19:54.701 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.961 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.961 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.961 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.961 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.961 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.961 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.961 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.222 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:19:55.222 04:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:19:55.792 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.792 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:55.792 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.792 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.792 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.792 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.792 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.792 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.792 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.053 04:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.313 00:19:56.313 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.313 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.313 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.573 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.573 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.573 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.573 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.573 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.573 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.573 { 00:19:56.573 "cntlid": 33, 00:19:56.573 "qid": 0, 00:19:56.573 "state": "enabled", 00:19:56.573 "thread": "nvmf_tgt_poll_group_000", 00:19:56.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:56.573 "listen_address": { 00:19:56.573 "trtype": "RDMA", 00:19:56.573 "adrfam": "IPv4", 00:19:56.573 "traddr": "192.168.100.8", 00:19:56.573 "trsvcid": "4420" 00:19:56.573 }, 00:19:56.573 "peer_address": { 00:19:56.573 "trtype": "RDMA", 00:19:56.573 "adrfam": "IPv4", 00:19:56.573 "traddr": "192.168.100.8", 00:19:56.573 "trsvcid": "58662" 00:19:56.573 }, 00:19:56.573 "auth": { 00:19:56.573 "state": "completed", 00:19:56.573 "digest": "sha256", 00:19:56.573 "dhgroup": "ffdhe6144" 00:19:56.573 } 00:19:56.573 } 00:19:56.573 ]' 00:19:56.573 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.573 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.573 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.833 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.833 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.833 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.833 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.833 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.833 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:56.833 04:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:57.803 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.804 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.804 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 
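Note on structure: the records above and below are one cell of a test matrix driven by three nested loops in auth.sh, visible through the @118-@123 line markers in this trace. A schematic reconstruction (the array contents are an assumption; only sha256/sha384, null/ffdhe6144/ffdhe8192, and key0-key3 are visible in this excerpt):

    for digest in "${digests[@]}"; do          # @118: one digest per pass
        for dhgroup in "${dhgroups[@]}"; do    # @119: one FFDHE group (or null) per pass
            for keyid in "${!keys[@]}"; do     # @120: key0..key3
                # @121: restrict the host to exactly one digest/dhgroup pair
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # @123: add_host + attach + verify + teardown for this key
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done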
00:19:57.804 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.804 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.804 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.804 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.804 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.804 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.374 00:19:58.374 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.374 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.374 04:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.374 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.374 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.374 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.374 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.374 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.374 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.374 { 00:19:58.374 "cntlid": 35, 00:19:58.374 "qid": 0, 00:19:58.374 "state": "enabled", 00:19:58.374 "thread": "nvmf_tgt_poll_group_000", 00:19:58.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:58.374 "listen_address": { 00:19:58.374 "trtype": "RDMA", 00:19:58.374 "adrfam": "IPv4", 00:19:58.374 "traddr": "192.168.100.8", 00:19:58.374 "trsvcid": "4420" 00:19:58.374 }, 00:19:58.374 "peer_address": { 00:19:58.374 "trtype": "RDMA", 00:19:58.374 "adrfam": "IPv4", 00:19:58.374 "traddr": "192.168.100.8", 00:19:58.374 "trsvcid": "46382" 00:19:58.374 }, 00:19:58.374 "auth": { 00:19:58.374 "state": "completed", 00:19:58.374 "digest": "sha256", 00:19:58.374 "dhgroup": "ffdhe6144" 00:19:58.374 } 00:19:58.374 } 00:19:58.374 ]' 00:19:58.374 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.374 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.374 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
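The jq probe just traced at @76, together with the dhgroup and state checks that continue below, make up the verification step of connect_authenticate. Condensed from the records (socket path, bdev name, and NQNs exactly as in the trace; the expected digest/dhgroup are the ones selected for this round):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Host side: the authenticated controller must exist under the expected name.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair must report a completed DH-HMAC-CHAP negotiation
    # with this round's digest and dhgroup.
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]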
00:19:58.633 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.633 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.633 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.633 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.634 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.893 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:19:58.893 04:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:19:59.463 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.463 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:59.463 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.463 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.463 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.463 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.463 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.463 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.723 
04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.723 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.983 00:19:59.983 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.983 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.983 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.243 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.243 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.243 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.243 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.243 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.243 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.243 { 00:20:00.243 "cntlid": 37, 00:20:00.243 "qid": 0, 00:20:00.243 "state": "enabled", 00:20:00.243 "thread": "nvmf_tgt_poll_group_000", 00:20:00.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:00.243 "listen_address": { 00:20:00.243 "trtype": "RDMA", 00:20:00.243 "adrfam": "IPv4", 00:20:00.243 "traddr": "192.168.100.8", 00:20:00.243 "trsvcid": "4420" 00:20:00.243 }, 00:20:00.243 "peer_address": { 00:20:00.243 "trtype": "RDMA", 00:20:00.243 "adrfam": "IPv4", 00:20:00.243 "traddr": "192.168.100.8", 00:20:00.243 "trsvcid": "50962" 00:20:00.243 }, 00:20:00.243 "auth": { 00:20:00.243 "state": "completed", 00:20:00.243 "digest": "sha256", 00:20:00.243 "dhgroup": "ffdhe6144" 00:20:00.243 } 00:20:00.243 } 00:20:00.243 ]' 00:20:00.243 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:00.243 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.243 04:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.243 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.243 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.243 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.243 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.243 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.503 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:00.503 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:01.071 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.331 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:01.331 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.331 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.331 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.331 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.331 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.331 04:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.591 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:01.591 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.591 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 
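The round just opened (connect_authenticate sha256 ffdhe6144 3) selects key3, the one key in this run registered without a matching controller key: as the records below show, the @70 add_host call carries --dhchap-key key3 alone, so key3 rounds exercise unidirectional (host-only) authentication rather than the bidirectional form used for key0-key2. That is the work of the ${ckeys[$3]:+...} expansion traced at @68; in isolation (keyid stands in for the function's third argument, and subnqn/hostnqn are illustrative names):

    # Yields two words when ckeys[keyid] is non-empty, nothing when it is
    # empty, so the optional flag splices cleanly into the RPC invocation.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"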
00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.592 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.852 00:20:01.852 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.852 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.852 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.112 { 00:20:02.112 "cntlid": 39, 00:20:02.112 "qid": 0, 00:20:02.112 "state": "enabled", 00:20:02.112 "thread": "nvmf_tgt_poll_group_000", 00:20:02.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:02.112 "listen_address": { 00:20:02.112 "trtype": "RDMA", 00:20:02.112 "adrfam": "IPv4", 00:20:02.112 "traddr": "192.168.100.8", 00:20:02.112 "trsvcid": "4420" 00:20:02.112 }, 00:20:02.112 "peer_address": { 00:20:02.112 "trtype": "RDMA", 00:20:02.112 "adrfam": "IPv4", 00:20:02.112 "traddr": "192.168.100.8", 00:20:02.112 "trsvcid": "60917" 00:20:02.112 }, 00:20:02.112 "auth": { 00:20:02.112 "state": "completed", 00:20:02.112 "digest": "sha256", 00:20:02.112 "dhgroup": "ffdhe6144" 
00:20:02.112 } 00:20:02.112 } 00:20:02.112 ]' 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.112 04:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.372 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:02.372 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:02.942 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.202 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:03.202 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.202 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.202 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.202 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.202 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.202 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.202 04:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.461 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:03.461 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.461 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:20:03.461 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.461 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:03.461 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.461 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.461 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.462 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.462 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.462 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.462 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.462 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.721 00:20:03.721 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.721 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.721 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.981 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.981 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.981 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.981 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.981 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.981 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.981 { 00:20:03.981 "cntlid": 41, 00:20:03.981 "qid": 0, 00:20:03.981 "state": "enabled", 00:20:03.981 "thread": "nvmf_tgt_poll_group_000", 00:20:03.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:03.981 "listen_address": { 00:20:03.981 "trtype": "RDMA", 00:20:03.981 "adrfam": "IPv4", 00:20:03.981 "traddr": "192.168.100.8", 00:20:03.981 "trsvcid": "4420" 00:20:03.981 }, 00:20:03.981 "peer_address": { 00:20:03.981 "trtype": "RDMA", 
00:20:03.981 "adrfam": "IPv4", 00:20:03.982 "traddr": "192.168.100.8", 00:20:03.982 "trsvcid": "57919" 00:20:03.982 }, 00:20:03.982 "auth": { 00:20:03.982 "state": "completed", 00:20:03.982 "digest": "sha256", 00:20:03.982 "dhgroup": "ffdhe8192" 00:20:03.982 } 00:20:03.982 } 00:20:03.982 ]' 00:20:03.982 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.982 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.982 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.242 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.242 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.242 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.242 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.242 04:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.242 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:04.242 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:05.182 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.182 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:05.182 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.182 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.182 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.182 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.182 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.182 04:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
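Two SPDK applications are in play throughout this trace: the NVMe-oF target, reached by rpc_cmd on the default RPC socket, and a separate host-side app providing the bdev_nvme initiator. The hostrpc helper at @121 above expands to the @31 line that follows, pinning every host-side call to the second app's socket. Assumed shape of the helper, matching the @31 expansion:

    hostrpc() {
        # Host-side RPCs go to /var/tmp/host.sock; target-side calls
        # (nvmf_subsystem_add_host, nvmf_subsystem_get_qpairs, ...) use
        # rpc_cmd and the default socket instead.
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }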
00:20:05.182 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:05.182 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.182 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.182 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:05.182 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:05.182 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.182 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.182 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.182 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.442 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.442 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.442 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.442 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.702 00:20:05.702 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.702 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.702 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.962 { 00:20:05.962 "cntlid": 43, 00:20:05.962 "qid": 0, 00:20:05.962 "state": "enabled", 00:20:05.962 "thread": "nvmf_tgt_poll_group_000", 
00:20:05.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:05.962 "listen_address": { 00:20:05.962 "trtype": "RDMA", 00:20:05.962 "adrfam": "IPv4", 00:20:05.962 "traddr": "192.168.100.8", 00:20:05.962 "trsvcid": "4420" 00:20:05.962 }, 00:20:05.962 "peer_address": { 00:20:05.962 "trtype": "RDMA", 00:20:05.962 "adrfam": "IPv4", 00:20:05.962 "traddr": "192.168.100.8", 00:20:05.962 "trsvcid": "54740" 00:20:05.962 }, 00:20:05.962 "auth": { 00:20:05.962 "state": "completed", 00:20:05.962 "digest": "sha256", 00:20:05.962 "dhgroup": "ffdhe8192" 00:20:05.962 } 00:20:05.962 } 00:20:05.962 ]' 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.962 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.222 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.222 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.222 04:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.222 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:06.222 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:07.161 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
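The --dhchap-secret/--dhchap-ctrl-secret strings in the @36 connect records above follow the NVMe-oF DH-HMAC-CHAP key format "DHHC-1:<id>:<base64 secret + CRC>:", where the id field indicates the secret transform and size (01/02/03 for SHA-256/384/512-sized secrets, 00 for a non-hashed secret; treat that mapping as background knowledge, not something this log states). A key of that shape could be generated with nvme-cli, assuming the installed version provides gen-dhchap-key:

    # Illustrative only: produce a 32-byte SHA-256-transformed host key
    # bound to the host NQN used throughout this trace.
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    # The result is passed verbatim on connect, as in the @36 records:
    #   nvme connect ... --dhchap-secret 'DHHC-1:01:<base64>:' \
    #                    --dhchap-ctrl-secret 'DHHC-1:02:<base64>:'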
00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.162 04:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.731 00:20:07.731 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.731 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.732 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.991 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
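The @563 xtrace_disable / @10 set +x / @591 [[ 0 == 0 ]] triplet that just closed the get_qpairs call brackets every rpc_cmd in this trace: common/autotest_common.sh mutes tracing while the RPC runs and then asserts its exit status, which is why each successful call leaves a [[ 0 == 0 ]] record. A minimal sketch of the idiom (the function's real internals are an assumption; only the line markers are visible here):

    rpc_cmd() {
        xtrace_disable                    # @563, which runs set +x at @10
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"
        local rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                    # @591: traces as [[ 0 == 0 ]] on success
    }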
00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.992 { 00:20:07.992 "cntlid": 45, 00:20:07.992 "qid": 0, 00:20:07.992 "state": "enabled", 00:20:07.992 "thread": "nvmf_tgt_poll_group_000", 00:20:07.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:07.992 "listen_address": { 00:20:07.992 "trtype": "RDMA", 00:20:07.992 "adrfam": "IPv4", 00:20:07.992 "traddr": "192.168.100.8", 00:20:07.992 "trsvcid": "4420" 00:20:07.992 }, 00:20:07.992 "peer_address": { 00:20:07.992 "trtype": "RDMA", 00:20:07.992 "adrfam": "IPv4", 00:20:07.992 "traddr": "192.168.100.8", 00:20:07.992 "trsvcid": "48663" 00:20:07.992 }, 00:20:07.992 "auth": { 00:20:07.992 "state": "completed", 00:20:07.992 "digest": "sha256", 00:20:07.992 "dhgroup": "ffdhe8192" 00:20:07.992 } 00:20:07.992 } 00:20:07.992 ]' 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.992 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.252 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:08.252 04:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:08.820 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.080 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:09.080 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.080 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.080 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.080 04:36:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.080 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.080 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.340 04:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.599 00:20:09.599 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.599 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.599 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.858 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.858 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.858 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.858 04:36:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.858 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.858 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.858 { 00:20:09.858 "cntlid": 47, 00:20:09.858 "qid": 0, 00:20:09.858 "state": "enabled", 00:20:09.858 "thread": "nvmf_tgt_poll_group_000", 00:20:09.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:09.858 "listen_address": { 00:20:09.858 "trtype": "RDMA", 00:20:09.858 "adrfam": "IPv4", 00:20:09.858 "traddr": "192.168.100.8", 00:20:09.858 "trsvcid": "4420" 00:20:09.858 }, 00:20:09.858 "peer_address": { 00:20:09.858 "trtype": "RDMA", 00:20:09.858 "adrfam": "IPv4", 00:20:09.858 "traddr": "192.168.100.8", 00:20:09.858 "trsvcid": "43192" 00:20:09.858 }, 00:20:09.858 "auth": { 00:20:09.858 "state": "completed", 00:20:09.858 "digest": "sha256", 00:20:09.858 "dhgroup": "ffdhe8192" 00:20:09.858 } 00:20:09.858 } 00:20:09.858 ]' 00:20:09.858 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.858 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.858 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.117 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.117 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.117 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.117 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.117 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.377 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:10.377 04:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.948 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.208 04:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.468 00:20:11.468 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.468 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.468 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.730 04:36:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.730 { 00:20:11.730 "cntlid": 49, 00:20:11.730 "qid": 0, 00:20:11.730 "state": "enabled", 00:20:11.730 "thread": "nvmf_tgt_poll_group_000", 00:20:11.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:11.730 "listen_address": { 00:20:11.730 "trtype": "RDMA", 00:20:11.730 "adrfam": "IPv4", 00:20:11.730 "traddr": "192.168.100.8", 00:20:11.730 "trsvcid": "4420" 00:20:11.730 }, 00:20:11.730 "peer_address": { 00:20:11.730 "trtype": "RDMA", 00:20:11.730 "adrfam": "IPv4", 00:20:11.730 "traddr": "192.168.100.8", 00:20:11.730 "trsvcid": "54824" 00:20:11.730 }, 00:20:11.730 "auth": { 00:20:11.730 "state": "completed", 00:20:11.730 "digest": "sha384", 00:20:11.730 "dhgroup": "null" 00:20:11.730 } 00:20:11.730 } 00:20:11.730 ]' 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.730 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.990 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:11.990 04:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:12.560 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.819 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.819 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:12.819 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.819 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.819 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.819 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.819 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.819 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.078 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.079 00:20:13.339 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.339 04:36:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.339 04:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.339 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.339 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.339 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.339 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.339 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.339 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.339 { 00:20:13.339 "cntlid": 51, 00:20:13.339 "qid": 0, 00:20:13.339 "state": "enabled", 00:20:13.339 "thread": "nvmf_tgt_poll_group_000", 00:20:13.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:13.339 "listen_address": { 00:20:13.339 "trtype": "RDMA", 00:20:13.339 "adrfam": "IPv4", 00:20:13.339 "traddr": "192.168.100.8", 00:20:13.339 "trsvcid": "4420" 00:20:13.339 }, 00:20:13.339 "peer_address": { 00:20:13.339 "trtype": "RDMA", 00:20:13.339 "adrfam": "IPv4", 00:20:13.339 "traddr": "192.168.100.8", 00:20:13.339 "trsvcid": "59586" 00:20:13.339 }, 00:20:13.339 "auth": { 00:20:13.339 "state": "completed", 00:20:13.339 "digest": "sha384", 00:20:13.339 "dhgroup": "null" 00:20:13.339 } 00:20:13.339 } 00:20:13.339 ]' 00:20:13.339 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.339 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.339 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.599 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.599 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.599 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.599 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.599 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.859 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:13.859 04:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret 
DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:14.430 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.430 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:14.430 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.430 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.430 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.430 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.430 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.430 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.690 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.950 00:20:14.950 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.950 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.950 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.210 { 00:20:15.210 "cntlid": 53, 00:20:15.210 "qid": 0, 00:20:15.210 "state": "enabled", 00:20:15.210 "thread": "nvmf_tgt_poll_group_000", 00:20:15.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:15.210 "listen_address": { 00:20:15.210 "trtype": "RDMA", 00:20:15.210 "adrfam": "IPv4", 00:20:15.210 "traddr": "192.168.100.8", 00:20:15.210 "trsvcid": "4420" 00:20:15.210 }, 00:20:15.210 "peer_address": { 00:20:15.210 "trtype": "RDMA", 00:20:15.210 "adrfam": "IPv4", 00:20:15.210 "traddr": "192.168.100.8", 00:20:15.210 "trsvcid": "53461" 00:20:15.210 }, 00:20:15.210 "auth": { 00:20:15.210 "state": "completed", 00:20:15.210 "digest": "sha384", 00:20:15.210 "dhgroup": "null" 00:20:15.210 } 00:20:15.210 } 00:20:15.210 ]' 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.210 04:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.470 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:15.470 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:16.041 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.301 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:16.301 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.301 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.301 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.301 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.301 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.301 04:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.301 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:16.301 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.301 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.301 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.301 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.301 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.301 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:16.301 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.301 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.560 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.560 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.560 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.561 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.561 00:20:16.820 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.820 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.820 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.820 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.820 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.820 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.820 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.820 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.820 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.820 { 00:20:16.820 "cntlid": 55, 00:20:16.820 "qid": 0, 00:20:16.820 "state": "enabled", 00:20:16.820 "thread": "nvmf_tgt_poll_group_000", 00:20:16.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:16.821 "listen_address": { 00:20:16.821 "trtype": "RDMA", 00:20:16.821 "adrfam": "IPv4", 00:20:16.821 "traddr": "192.168.100.8", 00:20:16.821 "trsvcid": "4420" 00:20:16.821 }, 00:20:16.821 "peer_address": { 00:20:16.821 "trtype": "RDMA", 00:20:16.821 "adrfam": "IPv4", 00:20:16.821 "traddr": "192.168.100.8", 00:20:16.821 "trsvcid": "42037" 00:20:16.821 }, 00:20:16.821 "auth": { 00:20:16.821 "state": "completed", 00:20:16.821 "digest": "sha384", 00:20:16.821 "dhgroup": "null" 00:20:16.821 } 00:20:16.821 } 00:20:16.821 ]' 00:20:16.821 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.080 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.080 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.080 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:17.080 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.080 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.080 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.080 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.340 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:17.340 04:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t 
rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:17.910 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.910 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:17.910 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.910 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.910 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.910 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.910 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.910 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.910 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:18.170 04:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.430 00:20:18.430 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.430 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.430 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.690 { 00:20:18.690 "cntlid": 57, 00:20:18.690 "qid": 0, 00:20:18.690 "state": "enabled", 00:20:18.690 "thread": "nvmf_tgt_poll_group_000", 00:20:18.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:18.690 "listen_address": { 00:20:18.690 "trtype": "RDMA", 00:20:18.690 "adrfam": "IPv4", 00:20:18.690 "traddr": "192.168.100.8", 00:20:18.690 "trsvcid": "4420" 00:20:18.690 }, 00:20:18.690 "peer_address": { 00:20:18.690 "trtype": "RDMA", 00:20:18.690 "adrfam": "IPv4", 00:20:18.690 "traddr": "192.168.100.8", 00:20:18.690 "trsvcid": "40672" 00:20:18.690 }, 00:20:18.690 "auth": { 00:20:18.690 "state": "completed", 00:20:18.690 "digest": "sha384", 00:20:18.690 "dhgroup": "ffdhe2048" 00:20:18.690 } 00:20:18.690 } 00:20:18.690 ]' 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.690 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.950 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- 
# nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:18.951 04:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:19.520 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.779 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:19.779 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.779 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.779 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.779 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.779 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.779 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.048 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.048 00:20:20.309 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.309 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.309 04:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.309 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.309 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.309 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.309 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.309 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.309 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.309 { 00:20:20.309 "cntlid": 59, 00:20:20.309 "qid": 0, 00:20:20.309 "state": "enabled", 00:20:20.309 "thread": "nvmf_tgt_poll_group_000", 00:20:20.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:20.309 "listen_address": { 00:20:20.309 "trtype": "RDMA", 00:20:20.309 "adrfam": "IPv4", 00:20:20.309 "traddr": "192.168.100.8", 00:20:20.309 "trsvcid": "4420" 00:20:20.309 }, 00:20:20.309 "peer_address": { 00:20:20.309 "trtype": "RDMA", 00:20:20.309 "adrfam": "IPv4", 00:20:20.309 "traddr": "192.168.100.8", 00:20:20.309 "trsvcid": "37763" 00:20:20.309 }, 00:20:20.309 "auth": { 00:20:20.310 "state": "completed", 00:20:20.310 "digest": "sha384", 00:20:20.310 "dhgroup": "ffdhe2048" 00:20:20.310 } 00:20:20.310 } 00:20:20.310 ]' 00:20:20.310 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.578 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.578 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.578 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.578 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.578 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.578 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:20.578 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.838 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:20.838 04:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:21.407 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.407 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:21.407 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.407 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.407 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.407 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.407 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.407 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
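The records above complete one pass of the sha384/ffdhe2048 leg: the host options are reset, the host NQN is re-registered on the target with the next key pair, and a fresh controller attach is authenticated and verified. A condensed sketch of that per-iteration RPC sequence follows, using only the flags traced above; it assumes rpc.py abbreviates the full scripts/rpc.py path from the trace, and <hostnqn> stands in for the long nqn.2014-08.org.nvmexpress:uuid:8013ee90-... value.

  # One connect_authenticate iteration (sketch; commands and flags as traced above)
  # Host side talks to /var/tmp/host.sock; target side uses the default RPC socket.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2            # target side
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2   # host side
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'                             # expect "completed"
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq checks on '.[0].auth.digest' and '.[0].auth.dhgroup' in the surrounding records assert the same qpair object against the digest and dhgroup configured for the iteration.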
00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.666 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.925 00:20:21.925 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.925 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.925 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.185 { 00:20:22.185 "cntlid": 61, 00:20:22.185 "qid": 0, 00:20:22.185 "state": "enabled", 00:20:22.185 "thread": "nvmf_tgt_poll_group_000", 00:20:22.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:22.185 "listen_address": { 00:20:22.185 "trtype": "RDMA", 00:20:22.185 "adrfam": "IPv4", 00:20:22.185 "traddr": "192.168.100.8", 00:20:22.185 "trsvcid": "4420" 00:20:22.185 }, 00:20:22.185 "peer_address": { 00:20:22.185 "trtype": "RDMA", 00:20:22.185 "adrfam": "IPv4", 00:20:22.185 "traddr": "192.168.100.8", 00:20:22.185 "trsvcid": "33010" 00:20:22.185 }, 00:20:22.185 "auth": { 00:20:22.185 "state": "completed", 00:20:22.185 "digest": "sha384", 00:20:22.185 "dhgroup": "ffdhe2048" 00:20:22.185 } 00:20:22.185 } 00:20:22.185 ]' 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.185 04:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.445 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:22.445 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:23.013 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.273 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:23.273 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.273 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.273 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.273 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.273 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.273 04:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 
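Note the ckey expansion in the record above: for key index 3 the test's ckeys entry is empty, so ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expands to nothing and key3 is registered one-way, with no --dhchap-ctrlr-key. A sketch of that conditional plus the kernel-initiator cross-check the loop runs afterwards; keyid stands in for the function's positional $3, <hostnqn> for the long host NQN, and the DHHC-1 secret is abbreviated rather than reproduced.

  # ckey is only appended when a controller key exists for this key index
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})  # empty for key3 here
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
      --dhchap-key "key$keyid" "${ckey[@]}"
  # nvme-cli leg: connect with the cleartext DHHC-1 secret, then tear down,
  # as in the target/auth.sh@36 / @82 / @83 records traced in this log
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <hostnqn> --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
      --dhchap-secret 'DHHC-1:03:...'                       # abbreviated
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0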
00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.533 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.533 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.793 { 00:20:23.793 "cntlid": 63, 00:20:23.793 "qid": 0, 00:20:23.793 "state": "enabled", 00:20:23.793 "thread": "nvmf_tgt_poll_group_000", 00:20:23.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:23.793 "listen_address": { 00:20:23.793 "trtype": "RDMA", 00:20:23.793 "adrfam": "IPv4", 00:20:23.793 "traddr": "192.168.100.8", 00:20:23.793 "trsvcid": "4420" 00:20:23.793 }, 00:20:23.793 "peer_address": { 00:20:23.793 "trtype": "RDMA", 00:20:23.793 "adrfam": "IPv4", 00:20:23.793 "traddr": "192.168.100.8", 00:20:23.793 "trsvcid": "43088" 00:20:23.793 }, 00:20:23.793 "auth": { 00:20:23.793 "state": "completed", 00:20:23.793 "digest": "sha384", 00:20:23.793 "dhgroup": "ffdhe2048" 00:20:23.793 } 00:20:23.793 } 00:20:23.793 ]' 00:20:23.793 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.053 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.053 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.053 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.053 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.053 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.053 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.053 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.312 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:24.312 04:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:24.881 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.881 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:24.881 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.881 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.881 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.881 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.881 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.881 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.881 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.141 04:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.401 00:20:25.401 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.401 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.401 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.661 { 00:20:25.661 "cntlid": 65, 00:20:25.661 "qid": 0, 00:20:25.661 "state": "enabled", 00:20:25.661 "thread": "nvmf_tgt_poll_group_000", 00:20:25.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:25.661 "listen_address": { 00:20:25.661 "trtype": "RDMA", 00:20:25.661 "adrfam": "IPv4", 00:20:25.661 "traddr": "192.168.100.8", 00:20:25.661 "trsvcid": "4420" 00:20:25.661 }, 00:20:25.661 "peer_address": { 00:20:25.661 "trtype": "RDMA", 00:20:25.661 "adrfam": "IPv4", 00:20:25.661 "traddr": "192.168.100.8", 00:20:25.661 "trsvcid": "43138" 00:20:25.661 }, 00:20:25.661 "auth": { 00:20:25.661 "state": "completed", 00:20:25.661 "digest": "sha384", 00:20:25.661 "dhgroup": "ffdhe3072" 00:20:25.661 } 00:20:25.661 } 00:20:25.661 ]' 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.661 04:37:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.661 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.921 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.921 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.921 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.921 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:25.921 04:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:26.490 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.751 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:26.751 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.751 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.751 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.751 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.751 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.751 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.011 04:37:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.011 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.271 00:20:27.271 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.271 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.271 04:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.529 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.529 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.529 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.529 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.529 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.529 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.529 { 00:20:27.529 "cntlid": 67, 00:20:27.529 "qid": 0, 00:20:27.529 "state": "enabled", 00:20:27.529 "thread": "nvmf_tgt_poll_group_000", 00:20:27.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:27.529 "listen_address": { 00:20:27.529 "trtype": "RDMA", 00:20:27.529 "adrfam": "IPv4", 00:20:27.529 "traddr": "192.168.100.8", 00:20:27.529 "trsvcid": "4420" 00:20:27.529 }, 00:20:27.529 "peer_address": { 00:20:27.529 "trtype": "RDMA", 00:20:27.529 "adrfam": "IPv4", 00:20:27.529 "traddr": 
"192.168.100.8", 00:20:27.530 "trsvcid": "46551" 00:20:27.530 }, 00:20:27.530 "auth": { 00:20:27.530 "state": "completed", 00:20:27.530 "digest": "sha384", 00:20:27.530 "dhgroup": "ffdhe3072" 00:20:27.530 } 00:20:27.530 } 00:20:27.530 ]' 00:20:27.530 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.530 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.530 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.530 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.530 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.530 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.530 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.530 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.790 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:27.790 04:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:28.360 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 
00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.620 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.879 00:20:28.879 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.879 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.879 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.139 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.139 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.139 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.139 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.139 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.139 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.139 { 00:20:29.139 "cntlid": 69, 00:20:29.139 "qid": 0, 00:20:29.139 "state": "enabled", 00:20:29.139 "thread": "nvmf_tgt_poll_group_000", 00:20:29.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:29.139 "listen_address": { 00:20:29.139 
"trtype": "RDMA", 00:20:29.139 "adrfam": "IPv4", 00:20:29.139 "traddr": "192.168.100.8", 00:20:29.139 "trsvcid": "4420" 00:20:29.139 }, 00:20:29.139 "peer_address": { 00:20:29.139 "trtype": "RDMA", 00:20:29.139 "adrfam": "IPv4", 00:20:29.139 "traddr": "192.168.100.8", 00:20:29.139 "trsvcid": "42423" 00:20:29.139 }, 00:20:29.139 "auth": { 00:20:29.139 "state": "completed", 00:20:29.139 "digest": "sha384", 00:20:29.139 "dhgroup": "ffdhe3072" 00:20:29.139 } 00:20:29.139 } 00:20:29.139 ]' 00:20:29.139 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.139 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.139 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.399 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.399 04:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.399 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.399 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.399 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.659 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:29.659 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:30.229 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.229 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:30.229 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.229 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.229 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.229 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.229 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.229 04:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.489 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.748 00:20:30.748 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.748 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.748 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.008 { 00:20:31.008 "cntlid": 71, 00:20:31.008 "qid": 0, 00:20:31.008 "state": "enabled", 
00:20:31.008 "thread": "nvmf_tgt_poll_group_000", 00:20:31.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:31.008 "listen_address": { 00:20:31.008 "trtype": "RDMA", 00:20:31.008 "adrfam": "IPv4", 00:20:31.008 "traddr": "192.168.100.8", 00:20:31.008 "trsvcid": "4420" 00:20:31.008 }, 00:20:31.008 "peer_address": { 00:20:31.008 "trtype": "RDMA", 00:20:31.008 "adrfam": "IPv4", 00:20:31.008 "traddr": "192.168.100.8", 00:20:31.008 "trsvcid": "59892" 00:20:31.008 }, 00:20:31.008 "auth": { 00:20:31.008 "state": "completed", 00:20:31.008 "digest": "sha384", 00:20:31.008 "dhgroup": "ffdhe3072" 00:20:31.008 } 00:20:31.008 } 00:20:31.008 ]' 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.008 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.267 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:31.267 04:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:31.836 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.096 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:32.096 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.096 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.096 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.096 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.096 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.096 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.096 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.355 04:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.615 00:20:32.615 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.615 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.615 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.875 04:37:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.875 { 00:20:32.875 "cntlid": 73, 00:20:32.875 "qid": 0, 00:20:32.875 "state": "enabled", 00:20:32.875 "thread": "nvmf_tgt_poll_group_000", 00:20:32.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:32.875 "listen_address": { 00:20:32.875 "trtype": "RDMA", 00:20:32.875 "adrfam": "IPv4", 00:20:32.875 "traddr": "192.168.100.8", 00:20:32.875 "trsvcid": "4420" 00:20:32.875 }, 00:20:32.875 "peer_address": { 00:20:32.875 "trtype": "RDMA", 00:20:32.875 "adrfam": "IPv4", 00:20:32.875 "traddr": "192.168.100.8", 00:20:32.875 "trsvcid": "47321" 00:20:32.875 }, 00:20:32.875 "auth": { 00:20:32.875 "state": "completed", 00:20:32.875 "digest": "sha384", 00:20:32.875 "dhgroup": "ffdhe4096" 00:20:32.875 } 00:20:32.875 } 00:20:32.875 ]' 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.875 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.135 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:33.136 04:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:33.706 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.706 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:33.706 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.706 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:33.706 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.706 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.706 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.965 04:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.225 00:20:34.225 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.225 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.225 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.486 04:37:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.486 { 00:20:34.486 "cntlid": 75, 00:20:34.486 "qid": 0, 00:20:34.486 "state": "enabled", 00:20:34.486 "thread": "nvmf_tgt_poll_group_000", 00:20:34.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:34.486 "listen_address": { 00:20:34.486 "trtype": "RDMA", 00:20:34.486 "adrfam": "IPv4", 00:20:34.486 "traddr": "192.168.100.8", 00:20:34.486 "trsvcid": "4420" 00:20:34.486 }, 00:20:34.486 "peer_address": { 00:20:34.486 "trtype": "RDMA", 00:20:34.486 "adrfam": "IPv4", 00:20:34.486 "traddr": "192.168.100.8", 00:20:34.486 "trsvcid": "43912" 00:20:34.486 }, 00:20:34.486 "auth": { 00:20:34.486 "state": "completed", 00:20:34.486 "digest": "sha384", 00:20:34.486 "dhgroup": "ffdhe4096" 00:20:34.486 } 00:20:34.486 } 00:20:34.486 ]' 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.486 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.746 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.746 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.746 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.746 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:34.746 04:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.686 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.946 00:20:35.946 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.946 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.946 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.206 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.206 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.206 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.206 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.206 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.206 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.206 { 00:20:36.206 "cntlid": 77, 00:20:36.206 "qid": 0, 00:20:36.206 "state": "enabled", 00:20:36.206 "thread": "nvmf_tgt_poll_group_000", 00:20:36.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:36.206 "listen_address": { 00:20:36.206 "trtype": "RDMA", 00:20:36.206 "adrfam": "IPv4", 00:20:36.206 "traddr": "192.168.100.8", 00:20:36.206 "trsvcid": "4420" 00:20:36.206 }, 00:20:36.206 "peer_address": { 00:20:36.206 "trtype": "RDMA", 00:20:36.206 "adrfam": "IPv4", 00:20:36.206 "traddr": "192.168.100.8", 00:20:36.206 "trsvcid": "44983" 00:20:36.206 }, 00:20:36.206 "auth": { 00:20:36.206 "state": "completed", 00:20:36.206 "digest": "sha384", 00:20:36.206 "dhgroup": "ffdhe4096" 00:20:36.206 } 00:20:36.206 } 00:20:36.206 ]' 00:20:36.206 04:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.206 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.206 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.465 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.465 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.465 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.465 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.465 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.725 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:36.725 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:37.295 04:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.295 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:37.295 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.295 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.295 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.295 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.295 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.295 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.555 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:37.555 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.556 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.815 00:20:37.815 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.815 04:37:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.815 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.075 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.075 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.075 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.075 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.075 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.075 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.075 { 00:20:38.075 "cntlid": 79, 00:20:38.075 "qid": 0, 00:20:38.075 "state": "enabled", 00:20:38.075 "thread": "nvmf_tgt_poll_group_000", 00:20:38.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:38.076 "listen_address": { 00:20:38.076 "trtype": "RDMA", 00:20:38.076 "adrfam": "IPv4", 00:20:38.076 "traddr": "192.168.100.8", 00:20:38.076 "trsvcid": "4420" 00:20:38.076 }, 00:20:38.076 "peer_address": { 00:20:38.076 "trtype": "RDMA", 00:20:38.076 "adrfam": "IPv4", 00:20:38.076 "traddr": "192.168.100.8", 00:20:38.076 "trsvcid": "43073" 00:20:38.076 }, 00:20:38.076 "auth": { 00:20:38.076 "state": "completed", 00:20:38.076 "digest": "sha384", 00:20:38.076 "dhgroup": "ffdhe4096" 00:20:38.076 } 00:20:38.076 } 00:20:38.076 ]' 00:20:38.076 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.076 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.076 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.076 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.076 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.076 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.076 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.076 04:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.336 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:38.336 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:38.904 04:37:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.164 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:39.164 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.164 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.164 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.164 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.164 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.164 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.164 04:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.423 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.683 00:20:39.683 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.683 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.683 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.943 { 00:20:39.943 "cntlid": 81, 00:20:39.943 "qid": 0, 00:20:39.943 "state": "enabled", 00:20:39.943 "thread": "nvmf_tgt_poll_group_000", 00:20:39.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:39.943 "listen_address": { 00:20:39.943 "trtype": "RDMA", 00:20:39.943 "adrfam": "IPv4", 00:20:39.943 "traddr": "192.168.100.8", 00:20:39.943 "trsvcid": "4420" 00:20:39.943 }, 00:20:39.943 "peer_address": { 00:20:39.943 "trtype": "RDMA", 00:20:39.943 "adrfam": "IPv4", 00:20:39.943 "traddr": "192.168.100.8", 00:20:39.943 "trsvcid": "38029" 00:20:39.943 }, 00:20:39.943 "auth": { 00:20:39.943 "state": "completed", 00:20:39.943 "digest": "sha384", 00:20:39.943 "dhgroup": "ffdhe6144" 00:20:39.943 } 00:20:39.943 } 00:20:39.943 ]' 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.943 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.204 04:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:40.204 04:37:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:40.774 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.035 04:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.605 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.605 { 00:20:41.605 "cntlid": 83, 00:20:41.605 "qid": 0, 00:20:41.605 "state": "enabled", 00:20:41.605 "thread": "nvmf_tgt_poll_group_000", 00:20:41.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:41.605 "listen_address": { 00:20:41.605 "trtype": "RDMA", 00:20:41.605 "adrfam": "IPv4", 00:20:41.605 "traddr": "192.168.100.8", 00:20:41.605 "trsvcid": "4420" 00:20:41.605 }, 00:20:41.605 "peer_address": { 00:20:41.605 "trtype": "RDMA", 00:20:41.605 "adrfam": "IPv4", 00:20:41.605 "traddr": "192.168.100.8", 00:20:41.605 "trsvcid": "58842" 00:20:41.605 }, 00:20:41.605 "auth": { 00:20:41.605 "state": "completed", 00:20:41.605 "digest": "sha384", 00:20:41.605 "dhgroup": "ffdhe6144" 00:20:41.605 } 00:20:41.605 } 00:20:41.605 ]' 00:20:41.605 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.865 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.865 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.865 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.865 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.865 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.865 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.865 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.124 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:42.125 04:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:42.695 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.695 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.695 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.695 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.695 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.695 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.695 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:42.695 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.955 04:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.215 00:20:43.215 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.215 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.215 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.476 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.476 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.476 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.476 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.476 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.476 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.476 { 00:20:43.476 "cntlid": 85, 00:20:43.476 "qid": 0, 00:20:43.476 "state": "enabled", 00:20:43.476 "thread": "nvmf_tgt_poll_group_000", 00:20:43.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:43.476 "listen_address": { 00:20:43.476 "trtype": "RDMA", 00:20:43.476 "adrfam": "IPv4", 00:20:43.476 "traddr": "192.168.100.8", 00:20:43.476 "trsvcid": "4420" 00:20:43.476 }, 00:20:43.476 "peer_address": { 00:20:43.476 "trtype": "RDMA", 00:20:43.476 "adrfam": "IPv4", 00:20:43.476 "traddr": "192.168.100.8", 00:20:43.476 "trsvcid": "35140" 00:20:43.476 }, 00:20:43.476 "auth": { 00:20:43.476 "state": "completed", 00:20:43.476 "digest": "sha384", 00:20:43.476 "dhgroup": "ffdhe6144" 00:20:43.476 } 00:20:43.476 } 00:20:43.476 ]' 00:20:43.476 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.476 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.476 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.736 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:43.736 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.736 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.736 04:37:22 
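The @73-@77 assertions that follow each attach, such as the ones just above, amount to the checks sketched here: the controller must appear on the host side, and the target qpair must report a completed authentication with the expected digest and DH group. The hostrpc helper is reproduced from the @31 trace lines; the sha384/ffdhe6144 values match this round.

    hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }  # wrapper seen at @31

    # Host side: exactly one controller named nvme0 must exist.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair's auth block records what was negotiated.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down before the next key is exercised.
    hostrpc bdev_nvme_detach_controller nvme0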
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.736 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.736 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:43.736 04:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:37:23 
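Each round is also replayed through the kernel initiator with nvme-cli (the @36, @82 and @83 lines). The secrets travel in the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>: where, per the NVMe in-band authentication secret format, the two-digit <t> field names the secret transform (00 = none, 01/02/03 = SHA-256/384/512); the literal key material is elided in this sketch, which otherwise mirrors the key2 round just traced.

    # Kernel-initiator leg, condensed from the trace; secrets elided.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret 'DHHC-1:02:<host secret, base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<controller secret, base64>:'

    # Expected output: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Finally the host entry is removed so the next round starts clean.
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e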
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.708 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.278 00:20:45.278 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.278 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.278 04:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.278 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.278 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.278 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.278 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.278 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.278 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.278 { 00:20:45.278 "cntlid": 87, 00:20:45.278 "qid": 0, 00:20:45.278 "state": "enabled", 00:20:45.278 "thread": "nvmf_tgt_poll_group_000", 00:20:45.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:45.278 "listen_address": { 00:20:45.278 "trtype": "RDMA", 00:20:45.278 "adrfam": "IPv4", 00:20:45.278 "traddr": "192.168.100.8", 00:20:45.278 "trsvcid": "4420" 00:20:45.278 }, 00:20:45.278 "peer_address": { 00:20:45.278 "trtype": "RDMA", 00:20:45.278 "adrfam": "IPv4", 00:20:45.278 "traddr": "192.168.100.8", 00:20:45.278 "trsvcid": "53860" 00:20:45.278 }, 00:20:45.278 "auth": { 00:20:45.278 "state": "completed", 00:20:45.278 "digest": "sha384", 00:20:45.278 "dhgroup": "ffdhe6144" 00:20:45.278 } 00:20:45.278 } 00:20:45.278 ]' 00:20:45.278 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.538 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.538 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.538 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.538 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:20:45.538 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.538 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.538 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.798 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:45.798 04:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:46.369 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.369 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:46.369 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.369 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.369 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.369 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.369 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.369 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.369 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.630 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.200 00:20:47.200 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.200 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.200 04:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.200 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.200 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.200 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.200 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.460 { 00:20:47.460 "cntlid": 89, 00:20:47.460 "qid": 0, 00:20:47.460 "state": "enabled", 00:20:47.460 "thread": "nvmf_tgt_poll_group_000", 00:20:47.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:47.460 "listen_address": { 00:20:47.460 "trtype": "RDMA", 00:20:47.460 "adrfam": "IPv4", 00:20:47.460 "traddr": "192.168.100.8", 00:20:47.460 "trsvcid": "4420" 00:20:47.460 }, 00:20:47.460 "peer_address": { 00:20:47.460 "trtype": "RDMA", 00:20:47.460 "adrfam": "IPv4", 00:20:47.460 "traddr": "192.168.100.8", 00:20:47.460 "trsvcid": "51713" 00:20:47.460 }, 00:20:47.460 "auth": { 00:20:47.460 "state": "completed", 00:20:47.460 "digest": "sha384", 00:20:47.460 "dhgroup": "ffdhe8192" 00:20:47.460 } 00:20:47.460 } 00:20:47.460 ]' 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.460 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.720 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:47.720 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:48.291 04:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.291 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:48.291 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.291 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.292 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.292 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.292 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.292 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.552 04:37:27 
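Stepping back, the auth.sh line markers visible in the trace (@118-@123) make the driver loop of this phase recoverable; a sketch follows. The array names digests, dhgroups and keys appear verbatim in the trace, and the values in the comments are only those observable in this excerpt. Note also the @68 expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}): when a slot has no controller key, as in the key3 rounds above, the --dhchap-ctrlr-key argument simply drops out and that round exercises unidirectional authentication.

    # Driver loop reconstructed from the @118-@123 markers in the trace.
    for digest in "${digests[@]}"; do           # sha384 and sha512 appear here
        for dhgroup in "${dhgroups[@]}"; do     # e.g. ffdhe6144, ffdhe8192, null
            for keyid in "${!keys[@]}"; do      # key0..key3 in this run
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done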
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.552 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.122 00:20:49.122 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.123 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.123 04:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.384 { 00:20:49.384 "cntlid": 91, 00:20:49.384 "qid": 0, 00:20:49.384 "state": "enabled", 00:20:49.384 "thread": "nvmf_tgt_poll_group_000", 00:20:49.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:49.384 "listen_address": { 00:20:49.384 "trtype": "RDMA", 00:20:49.384 "adrfam": "IPv4", 00:20:49.384 "traddr": "192.168.100.8", 00:20:49.384 "trsvcid": "4420" 00:20:49.384 }, 00:20:49.384 "peer_address": { 00:20:49.384 "trtype": "RDMA", 00:20:49.384 "adrfam": "IPv4", 00:20:49.384 "traddr": "192.168.100.8", 00:20:49.384 "trsvcid": "49381" 00:20:49.384 }, 00:20:49.384 "auth": { 00:20:49.384 "state": "completed", 00:20:49.384 "digest": "sha384", 00:20:49.384 "dhgroup": "ffdhe8192" 00:20:49.384 } 00:20:49.384 } 
00:20:49.384 ]' 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.384 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.644 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:49.644 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:50.213 04:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.474 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.475 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.475 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.475 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.475 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.044 00:20:51.044 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.044 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.044 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.304 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.304 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.304 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.304 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.304 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.304 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.304 { 00:20:51.304 "cntlid": 93, 00:20:51.304 "qid": 0, 00:20:51.304 "state": "enabled", 00:20:51.304 "thread": "nvmf_tgt_poll_group_000", 00:20:51.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:51.304 "listen_address": { 00:20:51.304 "trtype": "RDMA", 00:20:51.304 "adrfam": "IPv4", 00:20:51.304 "traddr": "192.168.100.8", 00:20:51.304 "trsvcid": "4420" 00:20:51.304 }, 00:20:51.304 "peer_address": { 00:20:51.304 "trtype": "RDMA", 00:20:51.304 "adrfam": 
"IPv4", 00:20:51.304 "traddr": "192.168.100.8", 00:20:51.304 "trsvcid": "41347" 00:20:51.304 }, 00:20:51.304 "auth": { 00:20:51.304 "state": "completed", 00:20:51.304 "digest": "sha384", 00:20:51.304 "dhgroup": "ffdhe8192" 00:20:51.304 } 00:20:51.304 } 00:20:51.304 ]' 00:20:51.304 04:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.304 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.304 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.304 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.304 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.304 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.304 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.304 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.564 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:51.564 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:52.134 04:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.394 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:52.394 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.394 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.394 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.394 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.394 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.394 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.654 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe8192 3 00:20:52.654 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.654 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.654 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:52.654 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.654 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.654 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:52.654 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.655 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.655 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.655 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.655 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.655 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.914 00:20:52.914 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.914 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.914 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.174 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.174 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.174 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.174 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.174 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.174 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.174 { 00:20:53.174 "cntlid": 95, 00:20:53.174 "qid": 0, 00:20:53.174 "state": "enabled", 00:20:53.174 "thread": "nvmf_tgt_poll_group_000", 00:20:53.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:53.174 "listen_address": { 00:20:53.174 "trtype": "RDMA", 00:20:53.174 "adrfam": "IPv4", 00:20:53.174 
"traddr": "192.168.100.8", 00:20:53.174 "trsvcid": "4420" 00:20:53.174 }, 00:20:53.174 "peer_address": { 00:20:53.174 "trtype": "RDMA", 00:20:53.174 "adrfam": "IPv4", 00:20:53.174 "traddr": "192.168.100.8", 00:20:53.174 "trsvcid": "58430" 00:20:53.174 }, 00:20:53.174 "auth": { 00:20:53.174 "state": "completed", 00:20:53.174 "digest": "sha384", 00:20:53.174 "dhgroup": "ffdhe8192" 00:20:53.174 } 00:20:53.174 } 00:20:53.174 ]' 00:20:53.174 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.174 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.174 04:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.433 04:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.433 04:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.434 04:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.434 04:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.434 04:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.693 04:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:53.693 04:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:20:54.263 04:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.263 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:54.263 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.263 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.263 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.263 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:54.263 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.263 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.263 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.263 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.523 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.783 00:20:54.783 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.783 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.783 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
qpairs='[ 00:20:55.043 { 00:20:55.043 "cntlid": 97, 00:20:55.043 "qid": 0, 00:20:55.043 "state": "enabled", 00:20:55.043 "thread": "nvmf_tgt_poll_group_000", 00:20:55.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:55.043 "listen_address": { 00:20:55.043 "trtype": "RDMA", 00:20:55.043 "adrfam": "IPv4", 00:20:55.043 "traddr": "192.168.100.8", 00:20:55.043 "trsvcid": "4420" 00:20:55.043 }, 00:20:55.043 "peer_address": { 00:20:55.043 "trtype": "RDMA", 00:20:55.043 "adrfam": "IPv4", 00:20:55.043 "traddr": "192.168.100.8", 00:20:55.043 "trsvcid": "34767" 00:20:55.043 }, 00:20:55.043 "auth": { 00:20:55.043 "state": "completed", 00:20:55.043 "digest": "sha512", 00:20:55.043 "dhgroup": "null" 00:20:55.043 } 00:20:55.043 } 00:20:55.043 ]' 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.043 04:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.304 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:55.304 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:20:55.873 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
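The rounds above have just switched to SHA-512 with the null DH group, which the qpair dump confirms ("digest": "sha512", "dhgroup": "null"). With the null group no Diffie-Hellman exchange takes place, so the handshake reduces to plain HMAC challenge-response; only the host-side options line changes, as sketched here under the same assumptions as the earlier snippets.

    # Same cycle as before, now restricted to SHA-512 with no DH exchange.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null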
target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.134 04:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.394 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.654 04:37:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.654 { 00:20:56.654 "cntlid": 99, 00:20:56.654 "qid": 0, 00:20:56.654 "state": "enabled", 00:20:56.654 "thread": "nvmf_tgt_poll_group_000", 00:20:56.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:56.654 "listen_address": { 00:20:56.654 "trtype": "RDMA", 00:20:56.654 "adrfam": "IPv4", 00:20:56.654 "traddr": "192.168.100.8", 00:20:56.654 "trsvcid": "4420" 00:20:56.654 }, 00:20:56.654 "peer_address": { 00:20:56.654 "trtype": "RDMA", 00:20:56.654 "adrfam": "IPv4", 00:20:56.654 "traddr": "192.168.100.8", 00:20:56.654 "trsvcid": "33363" 00:20:56.654 }, 00:20:56.654 "auth": { 00:20:56.654 "state": "completed", 00:20:56.654 "digest": "sha512", 00:20:56.654 "dhgroup": "null" 00:20:56.654 } 00:20:56.654 } 00:20:56.654 ]' 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.654 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.914 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.914 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.914 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.914 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.914 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.173 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:57.173 04:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:20:57.744 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.744 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:57.744 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.744 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
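The three [[ ... ]] comparisons repeated at target/auth.sh@75-77 are the actual pass/fail criteria: the target's view of the qpair has to report the digest and dhgroup that were configured, plus an auth state of "completed". Pulled out as a standalone sketch (expected values are the ones from this null-dhgroup round):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]      # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]        # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished
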
common/autotest_common.sh@10 -- # set +x 00:20:57.744 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.744 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.744 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.744 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.005 04:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.265 00:20:58.265 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.265 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.265 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.526 { 00:20:58.526 "cntlid": 101, 00:20:58.526 "qid": 0, 00:20:58.526 "state": "enabled", 00:20:58.526 "thread": "nvmf_tgt_poll_group_000", 00:20:58.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:58.526 "listen_address": { 00:20:58.526 "trtype": "RDMA", 00:20:58.526 "adrfam": "IPv4", 00:20:58.526 "traddr": "192.168.100.8", 00:20:58.526 "trsvcid": "4420" 00:20:58.526 }, 00:20:58.526 "peer_address": { 00:20:58.526 "trtype": "RDMA", 00:20:58.526 "adrfam": "IPv4", 00:20:58.526 "traddr": "192.168.100.8", 00:20:58.526 "trsvcid": "60227" 00:20:58.526 }, 00:20:58.526 "auth": { 00:20:58.526 "state": "completed", 00:20:58.526 "digest": "sha512", 00:20:58.526 "dhgroup": "null" 00:20:58.526 } 00:20:58.526 } 00:20:58.526 ]' 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.526 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.786 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.786 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.786 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.786 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:58.786 04:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:20:59.356 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.617 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:59.617 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.617 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.617 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.617 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.617 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.617 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.877 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.137 00:21:00.137 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.137 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.137 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.137 04:37:38 
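Note that this key3 round registers the host with --dhchap-key key3 alone: the ckey array built at target/auth.sh@68 uses bash's ${var:+...} expansion, which produces the --dhchap-ctrlr-key flag pair only when a controller key exists for that key id, so key3 falls back to unidirectional authentication. A standalone illustration with hypothetical values:

    ckeys=([0]=ckey0material [3]="")          # key id 3 has no controller key
    for id in 0 3; do
        ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
        echo "key$id: ${ckey[*]:-<unidirectional, no controller key>}"
    done
    # key0: --dhchap-ctrlr-key ckey0
    # key3: <unidirectional, no controller key>
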
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.137 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.137 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.137 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.398 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.398 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.398 { 00:21:00.398 "cntlid": 103, 00:21:00.398 "qid": 0, 00:21:00.398 "state": "enabled", 00:21:00.398 "thread": "nvmf_tgt_poll_group_000", 00:21:00.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:00.398 "listen_address": { 00:21:00.398 "trtype": "RDMA", 00:21:00.398 "adrfam": "IPv4", 00:21:00.398 "traddr": "192.168.100.8", 00:21:00.398 "trsvcid": "4420" 00:21:00.398 }, 00:21:00.398 "peer_address": { 00:21:00.398 "trtype": "RDMA", 00:21:00.398 "adrfam": "IPv4", 00:21:00.398 "traddr": "192.168.100.8", 00:21:00.398 "trsvcid": "34459" 00:21:00.398 }, 00:21:00.398 "auth": { 00:21:00.398 "state": "completed", 00:21:00.398 "digest": "sha512", 00:21:00.398 "dhgroup": "null" 00:21:00.398 } 00:21:00.398 } 00:21:00.398 ]' 00:21:00.398 04:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.398 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.398 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.398 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:00.398 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.398 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.398 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.398 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.657 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:00.657 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:01.227 04:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.227 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:01.227 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.227 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.227 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.227 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.227 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.227 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.227 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.487 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.747 00:21:01.747 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.747 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:01.747 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.006 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.006 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.006 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.006 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.006 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.006 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.006 { 00:21:02.006 "cntlid": 105, 00:21:02.006 "qid": 0, 00:21:02.006 "state": "enabled", 00:21:02.006 "thread": "nvmf_tgt_poll_group_000", 00:21:02.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:02.007 "listen_address": { 00:21:02.007 "trtype": "RDMA", 00:21:02.007 "adrfam": "IPv4", 00:21:02.007 "traddr": "192.168.100.8", 00:21:02.007 "trsvcid": "4420" 00:21:02.007 }, 00:21:02.007 "peer_address": { 00:21:02.007 "trtype": "RDMA", 00:21:02.007 "adrfam": "IPv4", 00:21:02.007 "traddr": "192.168.100.8", 00:21:02.007 "trsvcid": "42934" 00:21:02.007 }, 00:21:02.007 "auth": { 00:21:02.007 "state": "completed", 00:21:02.007 "digest": "sha512", 00:21:02.007 "dhgroup": "ffdhe2048" 00:21:02.007 } 00:21:02.007 } 00:21:02.007 ]' 00:21:02.007 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.007 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.007 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.007 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.007 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.007 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.007 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.007 04:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.266 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:02.266 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:02.836 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.095 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:03.095 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.095 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.095 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.095 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.095 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.095 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.355 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.356 04:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.616 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.616 { 00:21:03.616 "cntlid": 107, 00:21:03.616 "qid": 0, 00:21:03.616 "state": "enabled", 00:21:03.616 "thread": "nvmf_tgt_poll_group_000", 00:21:03.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:03.616 "listen_address": { 00:21:03.616 "trtype": "RDMA", 00:21:03.616 "adrfam": "IPv4", 00:21:03.616 "traddr": "192.168.100.8", 00:21:03.616 "trsvcid": "4420" 00:21:03.616 }, 00:21:03.616 "peer_address": { 00:21:03.616 "trtype": "RDMA", 00:21:03.616 "adrfam": "IPv4", 00:21:03.616 "traddr": "192.168.100.8", 00:21:03.616 "trsvcid": "52385" 00:21:03.616 }, 00:21:03.616 "auth": { 00:21:03.616 "state": "completed", 00:21:03.616 "digest": "sha512", 00:21:03.616 "dhgroup": "ffdhe2048" 00:21:03.616 } 00:21:03.616 } 00:21:03.616 ]' 00:21:03.616 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.876 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.876 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.876 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.876 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.876 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.876 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.876 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.136 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:04.136 04:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:04.704 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.704 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:04.704 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.704 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.704 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.704 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.704 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.704 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
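With key2 the group has moved from null to ffdhe2048: this whole stretch of the log is the pair of nested loops at target/auth.sh@119-123, which re-run the same round for every key id under each DH group with the sha512 digest. Roughly (the full dhgroup list is an assumption; the log so far shows null, ffdhe2048, and ffdhe3072):

    for dhgroup in "${dhgroups[@]}"; do         # null ffdhe2048 ffdhe3072 ...
        for keyid in "${!keys[@]}"; do          # key ids 0..3 seen in the log
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"                      # auth.sh@121
            connect_authenticate sha512 "$dhgroup" "$keyid"       # auth.sh@123
        done
    done
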
00:21:04.964 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.224 00:21:05.224 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.224 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.224 04:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.484 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.484 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.484 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.484 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.484 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.484 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.484 { 00:21:05.485 "cntlid": 109, 00:21:05.485 "qid": 0, 00:21:05.485 "state": "enabled", 00:21:05.485 "thread": "nvmf_tgt_poll_group_000", 00:21:05.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:05.485 "listen_address": { 00:21:05.485 "trtype": "RDMA", 00:21:05.485 "adrfam": "IPv4", 00:21:05.485 "traddr": "192.168.100.8", 00:21:05.485 "trsvcid": "4420" 00:21:05.485 }, 00:21:05.485 "peer_address": { 00:21:05.485 "trtype": "RDMA", 00:21:05.485 "adrfam": "IPv4", 00:21:05.485 "traddr": "192.168.100.8", 00:21:05.485 "trsvcid": "53198" 00:21:05.485 }, 00:21:05.485 "auth": { 00:21:05.485 "state": "completed", 00:21:05.485 "digest": "sha512", 00:21:05.485 "dhgroup": "ffdhe2048" 00:21:05.485 } 00:21:05.485 } 00:21:05.485 ]' 00:21:05.485 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.485 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.485 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.485 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.485 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.485 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.485 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.485 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.745 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:05.745 04:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:06.312 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.571 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:06.571 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.571 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.571 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.571 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.571 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.571 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.830 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.831 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.831 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.831 00:21:07.090 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.090 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.090 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.090 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.091 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.091 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.091 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.091 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.091 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.091 { 00:21:07.091 "cntlid": 111, 00:21:07.091 "qid": 0, 00:21:07.091 "state": "enabled", 00:21:07.091 "thread": "nvmf_tgt_poll_group_000", 00:21:07.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:07.091 "listen_address": { 00:21:07.091 "trtype": "RDMA", 00:21:07.091 "adrfam": "IPv4", 00:21:07.091 "traddr": "192.168.100.8", 00:21:07.091 "trsvcid": "4420" 00:21:07.091 }, 00:21:07.091 "peer_address": { 00:21:07.091 "trtype": "RDMA", 00:21:07.091 "adrfam": "IPv4", 00:21:07.091 "traddr": "192.168.100.8", 00:21:07.091 "trsvcid": "60532" 00:21:07.091 }, 00:21:07.091 "auth": { 00:21:07.091 "state": "completed", 00:21:07.091 "digest": "sha512", 00:21:07.091 "dhgroup": "ffdhe2048" 00:21:07.091 } 00:21:07.091 } 00:21:07.091 ]' 00:21:07.091 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.091 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.091 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.351 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.351 04:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.351 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.351 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.351 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.611 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:07.611 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:08.181 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.182 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:08.182 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.182 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.182 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.182 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.182 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.182 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.182 04:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.441 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:08.441 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.441 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.441 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:08.441 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.441 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.441 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.442 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.442 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.442 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
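A note on the secret strings handed to nvme connect throughout: DHHC-1:NN:<base64>: is the standard textual form of a DH-HMAC-CHAP key, where NN names the hash the key was generated for (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the key material plus a CRC. A hedged sketch of producing and using one with nvme-cli (flag spellings can differ between nvme-cli versions):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    key=$(nvme gen-dhchap-key --hmac=3 --key-length=48 --nqn="$hostnqn")
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
        -q "$hostnqn" --dhchap-secret "$key"
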
00:21:08.442 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.442 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.442 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.702 00:21:08.702 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.702 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.702 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.962 { 00:21:08.962 "cntlid": 113, 00:21:08.962 "qid": 0, 00:21:08.962 "state": "enabled", 00:21:08.962 "thread": "nvmf_tgt_poll_group_000", 00:21:08.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:08.962 "listen_address": { 00:21:08.962 "trtype": "RDMA", 00:21:08.962 "adrfam": "IPv4", 00:21:08.962 "traddr": "192.168.100.8", 00:21:08.962 "trsvcid": "4420" 00:21:08.962 }, 00:21:08.962 "peer_address": { 00:21:08.962 "trtype": "RDMA", 00:21:08.962 "adrfam": "IPv4", 00:21:08.962 "traddr": "192.168.100.8", 00:21:08.962 "trsvcid": "55844" 00:21:08.962 }, 00:21:08.962 "auth": { 00:21:08.962 "state": "completed", 00:21:08.962 "digest": "sha512", 00:21:08.962 "dhgroup": "ffdhe3072" 00:21:08.962 } 00:21:08.962 } 00:21:08.962 ]' 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.962 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.222 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:09.222 04:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:09.792 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.052 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.053 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.053 
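The target/auth.sh@31 line that trails every hostrpc call shows what the wrapper is: rpc.py aimed at /var/tmp/host.sock, the RPC socket of the second SPDK application acting as the NVMe-oF host, kept separate from the nvmf target's default socket. A minimal equivalent of the wrapper as it appears in the log:

    hostrpc() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # "nvme0" while attached
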
04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.053 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.053 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.053 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.053 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.312 04:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.312 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.579 { 00:21:10.579 "cntlid": 115, 00:21:10.579 "qid": 0, 00:21:10.579 "state": "enabled", 00:21:10.579 "thread": "nvmf_tgt_poll_group_000", 00:21:10.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:10.579 "listen_address": { 00:21:10.579 "trtype": "RDMA", 00:21:10.579 "adrfam": "IPv4", 00:21:10.579 "traddr": "192.168.100.8", 00:21:10.579 "trsvcid": "4420" 00:21:10.579 }, 00:21:10.579 "peer_address": { 00:21:10.579 "trtype": "RDMA", 00:21:10.579 "adrfam": "IPv4", 00:21:10.579 "traddr": "192.168.100.8", 00:21:10.579 "trsvcid": "47606" 00:21:10.579 }, 00:21:10.579 "auth": { 00:21:10.579 "state": "completed", 00:21:10.579 "digest": "sha512", 00:21:10.579 "dhgroup": "ffdhe3072" 00:21:10.579 } 00:21:10.579 } 00:21:10.579 ]' 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.579 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.837 
04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.837 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.837 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.837 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.837 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.097 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:11.097 04:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:11.667 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.667 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:11.667 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.667 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.667 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.667 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.667 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.667 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.927 04:37:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.927 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.187 00:21:12.187 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.187 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.187 04:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.447 { 00:21:12.447 "cntlid": 117, 00:21:12.447 "qid": 0, 00:21:12.447 "state": "enabled", 00:21:12.447 "thread": "nvmf_tgt_poll_group_000", 00:21:12.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:12.447 "listen_address": { 00:21:12.447 "trtype": "RDMA", 00:21:12.447 "adrfam": "IPv4", 00:21:12.447 "traddr": "192.168.100.8", 00:21:12.447 "trsvcid": "4420" 00:21:12.447 }, 00:21:12.447 "peer_address": { 00:21:12.447 "trtype": "RDMA", 00:21:12.447 "adrfam": "IPv4", 00:21:12.447 "traddr": "192.168.100.8", 00:21:12.447 "trsvcid": "45009" 00:21:12.447 }, 00:21:12.447 "auth": { 00:21:12.447 "state": "completed", 00:21:12.447 "digest": "sha512", 00:21:12.447 "dhgroup": "ffdhe3072" 00:21:12.447 } 00:21:12.447 } 00:21:12.447 ]' 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
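The recurring qpair dumps in this log are where each authentication round is actually verified: after attaching a controller with a given key, the script fetches the subsystem's qpairs and asserts that the negotiated auth.state, auth.digest, and auth.dhgroup match the pass under test. A minimal standalone sketch of that check, assuming SPDK's rpc.py is on PATH and talking to the target's default RPC socket:

# Hypothetical condensation of the get_qpairs/jq checks traced above.
SUBSYS=nqn.2024-03.io.spdk:cnode0
qpairs=$(rpc.py nvmf_subsystem_get_qpairs "$SUBSYS")

# Every pass in this section expects a completed sha512 negotiation;
# the dhgroup rotates (ffdhe3072, ffdhe4096, ffdhe6144, ...) per pass.
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]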
00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.447 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.707 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:12.707 04:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:13.277 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.538 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:13.538 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.538 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.538 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.538 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.538 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.538 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.798 
04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.798 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.059 00:21:14.059 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.059 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.059 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.059 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.059 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.059 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.059 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.319 { 00:21:14.319 "cntlid": 119, 00:21:14.319 "qid": 0, 00:21:14.319 "state": "enabled", 00:21:14.319 "thread": "nvmf_tgt_poll_group_000", 00:21:14.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:14.319 "listen_address": { 00:21:14.319 "trtype": "RDMA", 00:21:14.319 "adrfam": "IPv4", 00:21:14.319 "traddr": "192.168.100.8", 00:21:14.319 "trsvcid": "4420" 00:21:14.319 }, 00:21:14.319 "peer_address": { 00:21:14.319 "trtype": "RDMA", 00:21:14.319 "adrfam": "IPv4", 00:21:14.319 "traddr": "192.168.100.8", 00:21:14.319 "trsvcid": "48192" 00:21:14.319 }, 00:21:14.319 "auth": { 00:21:14.319 "state": "completed", 00:21:14.319 "digest": "sha512", 00:21:14.319 "dhgroup": "ffdhe3072" 00:21:14.319 } 
00:21:14.319 } 00:21:14.319 ]' 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.319 04:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.579 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:14.579 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:15.149 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.149 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:15.149 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.149 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.149 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.149 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.149 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.149 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.149 04:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.409 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.669 00:21:15.669 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.669 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.669 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.930 { 00:21:15.930 "cntlid": 121, 00:21:15.930 "qid": 0, 00:21:15.930 "state": "enabled", 00:21:15.930 "thread": "nvmf_tgt_poll_group_000", 00:21:15.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:15.930 "listen_address": { 00:21:15.930 "trtype": "RDMA", 00:21:15.930 "adrfam": "IPv4", 00:21:15.930 "traddr": "192.168.100.8", 00:21:15.930 "trsvcid": "4420" 00:21:15.930 }, 00:21:15.930 "peer_address": { 00:21:15.930 "trtype": "RDMA", 
00:21:15.930 "adrfam": "IPv4", 00:21:15.930 "traddr": "192.168.100.8", 00:21:15.930 "trsvcid": "37548" 00:21:15.930 }, 00:21:15.930 "auth": { 00:21:15.930 "state": "completed", 00:21:15.930 "digest": "sha512", 00:21:15.930 "dhgroup": "ffdhe4096" 00:21:15.930 } 00:21:15.930 } 00:21:15.930 ]' 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:15.930 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.190 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.190 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.190 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.190 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:16.190 04:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.131 04:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.392 00:21:17.392 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.392 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.392 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.652 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.652 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.652 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.652 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.652 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.652 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.652 { 00:21:17.652 "cntlid": 123, 00:21:17.652 "qid": 0, 00:21:17.652 "state": "enabled", 00:21:17.652 "thread": "nvmf_tgt_poll_group_000", 
00:21:17.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:17.652 "listen_address": { 00:21:17.652 "trtype": "RDMA", 00:21:17.652 "adrfam": "IPv4", 00:21:17.652 "traddr": "192.168.100.8", 00:21:17.652 "trsvcid": "4420" 00:21:17.652 }, 00:21:17.652 "peer_address": { 00:21:17.652 "trtype": "RDMA", 00:21:17.652 "adrfam": "IPv4", 00:21:17.652 "traddr": "192.168.100.8", 00:21:17.652 "trsvcid": "32997" 00:21:17.652 }, 00:21:17.652 "auth": { 00:21:17.652 "state": "completed", 00:21:17.652 "digest": "sha512", 00:21:17.652 "dhgroup": "ffdhe4096" 00:21:17.652 } 00:21:17.652 } 00:21:17.652 ]' 00:21:17.652 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.652 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.652 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.913 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.913 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.913 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.913 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.913 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.173 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:18.173 04:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:18.750 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.750 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:18.750 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.750 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.750 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.750 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.750 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:21:18.750 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.012 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:19.012 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.012 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.013 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.272 00:21:19.272 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.272 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.273 04:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
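Each round also exercises the kernel initiator through nvme-cli, passing the DH-HMAC-CHAP secrets inline in the DHHC-1 representation (the base64-encoded strings seen in the traces). A sketch of that leg with placeholder secrets, since the real keys are generated fresh per run:

# nvme-cli leg of a round; <host key>/<ctrl key> are placeholders for
# the generated DHHC-1 secrets printed in the log above.
HOSTID=8013ee90-59d8-e711-906e-00163566263e
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
    -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret 'DHHC-1:02:<host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:01:<ctrl key>:'

# Tear down before the next digest/dhgroup/key combination.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0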
00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.533 { 00:21:19.533 "cntlid": 125, 00:21:19.533 "qid": 0, 00:21:19.533 "state": "enabled", 00:21:19.533 "thread": "nvmf_tgt_poll_group_000", 00:21:19.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:19.533 "listen_address": { 00:21:19.533 "trtype": "RDMA", 00:21:19.533 "adrfam": "IPv4", 00:21:19.533 "traddr": "192.168.100.8", 00:21:19.533 "trsvcid": "4420" 00:21:19.533 }, 00:21:19.533 "peer_address": { 00:21:19.533 "trtype": "RDMA", 00:21:19.533 "adrfam": "IPv4", 00:21:19.533 "traddr": "192.168.100.8", 00:21:19.533 "trsvcid": "40937" 00:21:19.533 }, 00:21:19.533 "auth": { 00:21:19.533 "state": "completed", 00:21:19.533 "digest": "sha512", 00:21:19.533 "dhgroup": "ffdhe4096" 00:21:19.533 } 00:21:19.533 } 00:21:19.533 ]' 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.533 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.793 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:19.793 04:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:20.363 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.623 04:37:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.623 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.193 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.193 04:37:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.193 { 00:21:21.193 "cntlid": 127, 00:21:21.193 "qid": 0, 00:21:21.193 "state": "enabled", 00:21:21.193 "thread": "nvmf_tgt_poll_group_000", 00:21:21.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:21.193 "listen_address": { 00:21:21.193 "trtype": "RDMA", 00:21:21.193 "adrfam": "IPv4", 00:21:21.193 "traddr": "192.168.100.8", 00:21:21.193 "trsvcid": "4420" 00:21:21.193 }, 00:21:21.193 "peer_address": { 00:21:21.193 "trtype": "RDMA", 00:21:21.193 "adrfam": "IPv4", 00:21:21.193 "traddr": "192.168.100.8", 00:21:21.193 "trsvcid": "58767" 00:21:21.193 }, 00:21:21.193 "auth": { 00:21:21.193 "state": "completed", 00:21:21.193 "digest": "sha512", 00:21:21.193 "dhgroup": "ffdhe4096" 00:21:21.193 } 00:21:21.193 } 00:21:21.193 ]' 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.193 04:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.454 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.454 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.454 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.454 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.454 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.454 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:21.454 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:22.395 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.395 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:22.395 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.395 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.395 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.395 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.395 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.395 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.395 04:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.395 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.965 00:21:22.965 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.965 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.965 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.965 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.965 04:38:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.965 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.965 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.965 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.965 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.965 { 00:21:22.965 "cntlid": 129, 00:21:22.965 "qid": 0, 00:21:22.965 "state": "enabled", 00:21:22.965 "thread": "nvmf_tgt_poll_group_000", 00:21:22.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:22.966 "listen_address": { 00:21:22.966 "trtype": "RDMA", 00:21:22.966 "adrfam": "IPv4", 00:21:22.966 "traddr": "192.168.100.8", 00:21:22.966 "trsvcid": "4420" 00:21:22.966 }, 00:21:22.966 "peer_address": { 00:21:22.966 "trtype": "RDMA", 00:21:22.966 "adrfam": "IPv4", 00:21:22.966 "traddr": "192.168.100.8", 00:21:22.966 "trsvcid": "58207" 00:21:22.966 }, 00:21:22.966 "auth": { 00:21:22.966 "state": "completed", 00:21:22.966 "digest": "sha512", 00:21:22.966 "dhgroup": "ffdhe6144" 00:21:22.966 } 00:21:22.966 } 00:21:22.966 ]' 00:21:22.966 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.966 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.966 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.225 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.225 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.225 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.225 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.225 04:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.485 04:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:23.485 04:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:24.055 04:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.055 04:38:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:24.055 04:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.055 04:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.055 04:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.055 04:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.055 04:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.055 04:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.316 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.576 00:21:24.576 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.576 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:21:24.576 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.836 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.836 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.836 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.836 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.836 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.836 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.836 { 00:21:24.836 "cntlid": 131, 00:21:24.836 "qid": 0, 00:21:24.836 "state": "enabled", 00:21:24.836 "thread": "nvmf_tgt_poll_group_000", 00:21:24.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:24.836 "listen_address": { 00:21:24.836 "trtype": "RDMA", 00:21:24.836 "adrfam": "IPv4", 00:21:24.836 "traddr": "192.168.100.8", 00:21:24.836 "trsvcid": "4420" 00:21:24.836 }, 00:21:24.836 "peer_address": { 00:21:24.836 "trtype": "RDMA", 00:21:24.836 "adrfam": "IPv4", 00:21:24.836 "traddr": "192.168.100.8", 00:21:24.836 "trsvcid": "45433" 00:21:24.836 }, 00:21:24.836 "auth": { 00:21:24.836 "state": "completed", 00:21:24.836 "digest": "sha512", 00:21:24.836 "dhgroup": "ffdhe6144" 00:21:24.836 } 00:21:24.836 } 00:21:24.836 ]' 00:21:24.836 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.836 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.836 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.096 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.096 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.096 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.096 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.096 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.356 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:25.356 04:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret 
DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:25.927 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.927 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:25.927 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.927 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.927 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.927 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.927 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.927 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.188 04:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.448 00:21:26.448 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.448 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.448 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.708 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.708 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.708 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.708 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.708 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.708 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.708 { 00:21:26.708 "cntlid": 133, 00:21:26.708 "qid": 0, 00:21:26.708 "state": "enabled", 00:21:26.708 "thread": "nvmf_tgt_poll_group_000", 00:21:26.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:26.708 "listen_address": { 00:21:26.708 "trtype": "RDMA", 00:21:26.708 "adrfam": "IPv4", 00:21:26.708 "traddr": "192.168.100.8", 00:21:26.708 "trsvcid": "4420" 00:21:26.708 }, 00:21:26.708 "peer_address": { 00:21:26.708 "trtype": "RDMA", 00:21:26.708 "adrfam": "IPv4", 00:21:26.708 "traddr": "192.168.100.8", 00:21:26.708 "trsvcid": "41205" 00:21:26.708 }, 00:21:26.708 "auth": { 00:21:26.708 "state": "completed", 00:21:26.709 "digest": "sha512", 00:21:26.709 "dhgroup": "ffdhe6144" 00:21:26.709 } 00:21:26.709 } 00:21:26.709 ]' 00:21:26.709 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.709 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.709 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.709 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.709 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.970 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.970 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.970 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.970 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:26.970 04:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:27.540 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.800 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:27.800 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.800 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.800 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.801 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.801 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.801 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.062 04:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.322 00:21:28.322 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.322 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.322 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.583 { 00:21:28.583 "cntlid": 135, 00:21:28.583 "qid": 0, 00:21:28.583 "state": "enabled", 00:21:28.583 "thread": "nvmf_tgt_poll_group_000", 00:21:28.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:28.583 "listen_address": { 00:21:28.583 "trtype": "RDMA", 00:21:28.583 "adrfam": "IPv4", 00:21:28.583 "traddr": "192.168.100.8", 00:21:28.583 "trsvcid": "4420" 00:21:28.583 }, 00:21:28.583 "peer_address": { 00:21:28.583 "trtype": "RDMA", 00:21:28.583 "adrfam": "IPv4", 00:21:28.583 "traddr": "192.168.100.8", 00:21:28.583 "trsvcid": "53350" 00:21:28.583 }, 00:21:28.583 "auth": { 00:21:28.583 "state": "completed", 00:21:28.583 "digest": "sha512", 00:21:28.583 "dhgroup": "ffdhe6144" 00:21:28.583 } 00:21:28.583 } 00:21:28.583 ]' 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.583 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.844 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 
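The cycles above and below each exercise one digest/dhgroup/key combination end to end: pin the host to a single DH-HMAC-CHAP digest and FFDHE group, register the host on the target with the matching key(s), attach a controller (authentication runs as part of controller connection), verify the qpair, then repeat the handshake with the kernel initiator and tear down. A minimal sketch of that per-combination sequence, reconstructed from the trace — the rpc.py path, host socket, NQNs, and key names are copied from the commands above, but the grouping into one standalone script (and the RPC/HOSTSOCK variables) is illustrative, not the verbatim target/auth.sh source:

#!/usr/bin/env bash
# Assumptions: an SPDK target is already listening on 192.168.100.8:4420 and
# keys key0..key3 / ckey0..ckey3 are already loaded, as earlier in this trace.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock    # host-side SPDK app; "hostrpc" in the trace
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Host side: allow exactly one digest/dhgroup pair (here sha512 + ffdhe6144).
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# 2. Target side (what rpc_cmd does in the harness, over the target's default
#    RPC socket): register the host with a host key and, for bidirectional
#    authentication, a controller key.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Host side: attach a controller; DH-HMAC-CHAP runs during connect.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4. Target side: confirm the qpair authenticated with the expected parameters.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth | .state, .digest, .dhgroup'
# expected output: completed / sha512 / ffdhe6144

# 5. Detach, re-run the handshake with the kernel initiator, then tear down.
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n $SUBNQN -i 1 -q $HOSTNQN \
  --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
  --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'   # secrets elided; see trace
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The negative tests near the end of the trace reuse steps 2-3 but deliberately mismatch the key the host presents against the one registered via nvmf_subsystem_add_host, so bdev_nvme_attach_controller is expected to fail with the JSON-RPC "Input/output error" responses dumped below.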
00:21:28.844 04:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:29.414 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.675 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:29.675 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.675 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.675 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.675 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.675 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.675 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:29.675 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.935 04:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.195 00:21:30.195 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.195 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.195 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.456 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.456 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.456 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.456 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.456 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.456 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.456 { 00:21:30.456 "cntlid": 137, 00:21:30.456 "qid": 0, 00:21:30.456 "state": "enabled", 00:21:30.456 "thread": "nvmf_tgt_poll_group_000", 00:21:30.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:30.456 "listen_address": { 00:21:30.456 "trtype": "RDMA", 00:21:30.456 "adrfam": "IPv4", 00:21:30.456 "traddr": "192.168.100.8", 00:21:30.456 "trsvcid": "4420" 00:21:30.456 }, 00:21:30.456 "peer_address": { 00:21:30.456 "trtype": "RDMA", 00:21:30.456 "adrfam": "IPv4", 00:21:30.456 "traddr": "192.168.100.8", 00:21:30.456 "trsvcid": "39309" 00:21:30.456 }, 00:21:30.456 "auth": { 00:21:30.456 "state": "completed", 00:21:30.456 "digest": "sha512", 00:21:30.456 "dhgroup": "ffdhe8192" 00:21:30.456 } 00:21:30.456 } 00:21:30.456 ]' 00:21:30.456 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.456 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.456 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.715 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.715 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.715 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.715 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.715 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.715 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:30.716 04:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.656 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.226 00:21:32.226 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.226 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.226 04:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.487 { 00:21:32.487 "cntlid": 139, 00:21:32.487 "qid": 0, 00:21:32.487 "state": "enabled", 00:21:32.487 "thread": "nvmf_tgt_poll_group_000", 00:21:32.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:32.487 "listen_address": { 00:21:32.487 "trtype": "RDMA", 00:21:32.487 "adrfam": "IPv4", 00:21:32.487 "traddr": "192.168.100.8", 00:21:32.487 "trsvcid": "4420" 00:21:32.487 }, 00:21:32.487 "peer_address": { 00:21:32.487 "trtype": "RDMA", 00:21:32.487 "adrfam": "IPv4", 00:21:32.487 "traddr": "192.168.100.8", 00:21:32.487 "trsvcid": "43679" 00:21:32.487 }, 00:21:32.487 "auth": { 00:21:32.487 "state": "completed", 00:21:32.487 "digest": "sha512", 00:21:32.487 "dhgroup": "ffdhe8192" 00:21:32.487 } 00:21:32.487 } 00:21:32.487 ]' 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.487 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.488 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.748 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:32.748 04:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: --dhchap-ctrl-secret DHHC-1:02:MjEzNzMwYmI5MTNjMzA4NjU1NjVkNTMxNDdkMzMxZmM1MTkwNWRlZjA5YTU2ZGJlv288LA==: 00:21:33.318 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.578 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:33.578 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.578 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.578 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.578 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.578 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.578 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.839 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.099 00:21:34.099 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.099 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.099 04:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.359 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.359 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.359 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.359 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.359 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.359 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.359 { 00:21:34.359 "cntlid": 141, 00:21:34.359 "qid": 0, 00:21:34.359 "state": "enabled", 00:21:34.359 "thread": "nvmf_tgt_poll_group_000", 00:21:34.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:34.359 "listen_address": { 00:21:34.359 "trtype": "RDMA", 00:21:34.359 "adrfam": "IPv4", 00:21:34.359 "traddr": "192.168.100.8", 00:21:34.359 "trsvcid": "4420" 00:21:34.359 }, 00:21:34.359 "peer_address": { 00:21:34.359 "trtype": "RDMA", 00:21:34.359 "adrfam": "IPv4", 00:21:34.359 "traddr": "192.168.100.8", 00:21:34.359 "trsvcid": "43461" 00:21:34.359 }, 00:21:34.359 "auth": { 00:21:34.359 "state": "completed", 00:21:34.359 "digest": "sha512", 00:21:34.359 "dhgroup": "ffdhe8192" 00:21:34.359 } 00:21:34.359 } 00:21:34.359 ]' 00:21:34.359 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.359 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.359 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.619 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.619 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.619 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.619 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.619 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.619 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:34.619 04:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:01:YmMzMTA4YmY3YjFmYzkyYWMzNTJkZThhN2Q2NTM5MDiQ9fV6: 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.560 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.131 00:21:36.131 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.131 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.131 04:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.391 { 00:21:36.391 "cntlid": 143, 00:21:36.391 "qid": 0, 00:21:36.391 "state": "enabled", 00:21:36.391 "thread": "nvmf_tgt_poll_group_000", 00:21:36.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:36.391 "listen_address": { 00:21:36.391 "trtype": "RDMA", 00:21:36.391 "adrfam": "IPv4", 00:21:36.391 "traddr": "192.168.100.8", 00:21:36.391 "trsvcid": "4420" 00:21:36.391 }, 00:21:36.391 "peer_address": { 00:21:36.391 "trtype": "RDMA", 00:21:36.391 "adrfam": "IPv4", 00:21:36.391 "traddr": "192.168.100.8", 00:21:36.391 "trsvcid": "35618" 00:21:36.391 }, 00:21:36.391 "auth": { 00:21:36.391 "state": "completed", 00:21:36.391 "digest": "sha512", 00:21:36.391 "dhgroup": "ffdhe8192" 00:21:36.391 } 00:21:36.391 } 00:21:36.391 ]' 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.391 04:38:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.391 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.651 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:36.651 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:37.222 04:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.482 04:38:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:37.482 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.742 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.742 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.742 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.742 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.742 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.742 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.742 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.001 00:21:38.001 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.001 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.001 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.261 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.261 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.261 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.261 04:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.261 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.261 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.261 { 00:21:38.262 "cntlid": 145, 00:21:38.262 "qid": 0, 00:21:38.262 "state": "enabled", 00:21:38.262 "thread": "nvmf_tgt_poll_group_000", 00:21:38.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:38.262 "listen_address": { 00:21:38.262 "trtype": "RDMA", 00:21:38.262 "adrfam": "IPv4", 00:21:38.262 "traddr": "192.168.100.8", 00:21:38.262 "trsvcid": "4420" 00:21:38.262 }, 00:21:38.262 
"peer_address": { 00:21:38.262 "trtype": "RDMA", 00:21:38.262 "adrfam": "IPv4", 00:21:38.262 "traddr": "192.168.100.8", 00:21:38.262 "trsvcid": "60834" 00:21:38.262 }, 00:21:38.262 "auth": { 00:21:38.262 "state": "completed", 00:21:38.262 "digest": "sha512", 00:21:38.262 "dhgroup": "ffdhe8192" 00:21:38.262 } 00:21:38.262 } 00:21:38.262 ]' 00:21:38.262 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.262 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.262 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.262 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.522 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.522 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.522 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.522 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.522 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:38.522 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YzUyZWFlZWFmNGQ5MTBjNGU3YjljOGYyYzE0YjNkMzAyZDkxMzU2NjUzZmQ1ZDdjnS/LFQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI0ZTkzMzg4MDQzZjY4NWRhYjQxZGQ5ZDYyNGUzMTFjZmVkYzM0NmI2YjFjMDVlZThkNTMzZTJkZWQwZjQzM5TNdes=: 00:21:39.462 04:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.462 04:38:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:39.462 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:39.722 request: 00:21:39.722 { 00:21:39.722 "name": "nvme0", 00:21:39.722 "trtype": "rdma", 00:21:39.722 "traddr": "192.168.100.8", 00:21:39.722 "adrfam": "ipv4", 00:21:39.722 "trsvcid": "4420", 00:21:39.722 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:39.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:39.722 "prchk_reftag": false, 00:21:39.722 "prchk_guard": false, 00:21:39.722 "hdgst": false, 00:21:39.722 "ddgst": false, 00:21:39.722 "dhchap_key": "key2", 00:21:39.722 "allow_unrecognized_csi": false, 00:21:39.722 "method": "bdev_nvme_attach_controller", 00:21:39.722 "req_id": 1 00:21:39.722 } 00:21:39.722 Got JSON-RPC error response 00:21:39.722 response: 00:21:39.722 { 00:21:39.722 "code": -5, 00:21:39.722 "message": "Input/output error" 00:21:39.722 } 00:21:39.981 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:39.982 04:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.241 request: 00:21:40.241 { 00:21:40.241 "name": "nvme0", 00:21:40.241 "trtype": "rdma", 00:21:40.241 "traddr": "192.168.100.8", 00:21:40.241 "adrfam": "ipv4", 00:21:40.241 "trsvcid": "4420", 00:21:40.241 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:40.241 "prchk_reftag": false, 00:21:40.241 "prchk_guard": false, 00:21:40.241 "hdgst": false, 00:21:40.241 "ddgst": false, 00:21:40.241 "dhchap_key": "key1", 00:21:40.241 "dhchap_ctrlr_key": "ckey2", 00:21:40.241 "allow_unrecognized_csi": false, 00:21:40.241 "method": "bdev_nvme_attach_controller", 00:21:40.241 "req_id": 1 00:21:40.241 } 00:21:40.241 Got JSON-RPC error response 00:21:40.241 response: 00:21:40.241 { 00:21:40.241 "code": -5, 00:21:40.241 "message": "Input/output error" 00:21:40.241 } 00:21:40.503 04:38:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.503 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.765 request: 00:21:40.765 { 00:21:40.765 "name": "nvme0", 
00:21:40.765 "trtype": "rdma", 00:21:40.765 "traddr": "192.168.100.8", 00:21:40.765 "adrfam": "ipv4", 00:21:40.765 "trsvcid": "4420", 00:21:40.765 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:40.765 "prchk_reftag": false, 00:21:40.765 "prchk_guard": false, 00:21:40.765 "hdgst": false, 00:21:40.765 "ddgst": false, 00:21:40.765 "dhchap_key": "key1", 00:21:40.765 "dhchap_ctrlr_key": "ckey1", 00:21:40.765 "allow_unrecognized_csi": false, 00:21:40.765 "method": "bdev_nvme_attach_controller", 00:21:40.765 "req_id": 1 00:21:40.765 } 00:21:40.765 Got JSON-RPC error response 00:21:40.765 response: 00:21:40.765 { 00:21:40.765 "code": -5, 00:21:40.765 "message": "Input/output error" 00:21:40.765 } 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 316227 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 316227 ']' 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 316227 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.765 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316227 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316227' 00:21:41.025 killing process with pid 316227 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 316227 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 316227 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.025 04:38:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=341125 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 341125 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 341125 ']' 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.025 04:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 341125 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 341125 ']' 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
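
For readers following the trace: the nvmfappstart/waitforlisten sequence above amounts to launching nvmf_tgt with --wait-for-rpc and polling its RPC socket until it answers. A minimal standalone sketch, reusing the binary path, flags, and socket from this run (the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its verbatim code):

    # Start the NVMe-oF target with nvmf_auth debug logging, as in this run.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Poll the default RPC socket until the app accepts RPCs.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
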
00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.284 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.544 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.544 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:41.544 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:41.544 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.544 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.849 null0 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kIG 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Zyo ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zyo 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0EY 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.BXH ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BXH 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
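
The keyring loop traced above (key0/ckey0, key1/ckey1) and continuing below (key2, key3) registers each DH-HCHAP key file under a slot name and, when a matching controller key file exists, its ckey counterpart. A sketch of a single iteration, using the /tmp/spdk.key-* file names from this run (the [[ -n ... ]] guard mirrors the harness's optional-ckey check):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Register the host key for slot 0 on the target's keyring.
    "$rpc_py" keyring_file_add_key key0 /tmp/spdk.key-null.kIG
    # Register the matching controller key only when one was generated.
    ckey0_file=/tmp/spdk.key-sha512.Zyo
    if [[ -n "$ckey0_file" ]]; then
        "$rpc_py" keyring_file_add_key ckey0 "$ckey0_file"
    fi
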
00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZaN 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.7w4 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7w4 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0iP 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.849 04:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.542 nvme0n1 00:21:42.542 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.542 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.542 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.823 { 00:21:42.823 "cntlid": 1, 00:21:42.823 "qid": 0, 00:21:42.823 "state": "enabled", 00:21:42.823 "thread": "nvmf_tgt_poll_group_000", 00:21:42.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:42.823 "listen_address": { 00:21:42.823 "trtype": "RDMA", 00:21:42.823 "adrfam": "IPv4", 00:21:42.823 "traddr": "192.168.100.8", 00:21:42.823 "trsvcid": "4420" 00:21:42.823 }, 00:21:42.823 "peer_address": { 00:21:42.823 "trtype": "RDMA", 00:21:42.823 "adrfam": "IPv4", 00:21:42.823 "traddr": "192.168.100.8", 00:21:42.823 "trsvcid": "59113" 00:21:42.823 }, 00:21:42.823 "auth": { 00:21:42.823 "state": "completed", 00:21:42.823 "digest": "sha512", 00:21:42.823 "dhgroup": "ffdhe8192" 00:21:42.823 } 00:21:42.823 } 00:21:42.823 ]' 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.823 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.103 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:43.103 04:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:43.777 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.064 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.065 04:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.324 request: 00:21:44.324 { 00:21:44.324 "name": "nvme0", 00:21:44.324 "trtype": "rdma", 00:21:44.324 "traddr": "192.168.100.8", 00:21:44.324 "adrfam": "ipv4", 00:21:44.324 "trsvcid": "4420", 00:21:44.324 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:44.325 "prchk_reftag": false, 00:21:44.325 "prchk_guard": false, 00:21:44.325 "hdgst": false, 00:21:44.325 "ddgst": false, 00:21:44.325 "dhchap_key": "key3", 00:21:44.325 "allow_unrecognized_csi": false, 00:21:44.325 "method": "bdev_nvme_attach_controller", 00:21:44.325 "req_id": 1 00:21:44.325 } 00:21:44.325 Got JSON-RPC error response 00:21:44.325 response: 00:21:44.325 { 00:21:44.325 "code": -5, 00:21:44.325 "message": "Input/output error" 00:21:44.325 } 00:21:44.325 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:44.325 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.325 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.325 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.325 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:44.325 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:44.325 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:44.325 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:44.584 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:44.584 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:44.584 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:44.584 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:44.584 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.584 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:44.584 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.584 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
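
The NOT case completed just above follows this section's negative pattern: restrict the host's allowed DH-HCHAP digests so authentication with the sha512-based key3 secret cannot complete, then require the attach to fail with JSON-RPC error -5 (Input/output error). A condensed sketch built only from commands that appear verbatim in this trace (the harness's NOT helper wraps the final command and inverts its exit status):

    hostrpc() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    # Host offers only sha256, which is incompatible with the key3 secret.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256
    # Expected to fail with code -5 ("Input/output error").
    if hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3; then
        echo "unexpected success" >&2; exit 1
    fi
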
00:21:44.584 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.585 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.844 request: 00:21:44.844 { 00:21:44.844 "name": "nvme0", 00:21:44.844 "trtype": "rdma", 00:21:44.844 "traddr": "192.168.100.8", 00:21:44.844 "adrfam": "ipv4", 00:21:44.844 "trsvcid": "4420", 00:21:44.844 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:44.844 "prchk_reftag": false, 00:21:44.844 "prchk_guard": false, 00:21:44.844 "hdgst": false, 00:21:44.844 "ddgst": false, 00:21:44.844 "dhchap_key": "key3", 00:21:44.844 "allow_unrecognized_csi": false, 00:21:44.844 "method": "bdev_nvme_attach_controller", 00:21:44.844 "req_id": 1 00:21:44.844 } 00:21:44.844 Got JSON-RPC error response 00:21:44.844 response: 00:21:44.844 { 00:21:44.844 "code": -5, 00:21:44.844 "message": "Input/output error" 00:21:44.844 } 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:44.844 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.103 04:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.363 request: 00:21:45.363 { 00:21:45.363 "name": "nvme0", 00:21:45.363 "trtype": "rdma", 00:21:45.363 "traddr": "192.168.100.8", 00:21:45.363 "adrfam": "ipv4", 00:21:45.363 "trsvcid": "4420", 00:21:45.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:45.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:45.363 "prchk_reftag": false, 00:21:45.363 "prchk_guard": false, 00:21:45.363 "hdgst": false, 00:21:45.363 "ddgst": false, 00:21:45.363 "dhchap_key": "key0", 00:21:45.363 "dhchap_ctrlr_key": "key1", 00:21:45.363 "allow_unrecognized_csi": false, 00:21:45.363 "method": "bdev_nvme_attach_controller", 00:21:45.363 "req_id": 1 00:21:45.363 } 00:21:45.363 Got JSON-RPC error response 00:21:45.363 response: 00:21:45.363 { 00:21:45.363 "code": -5, 00:21:45.363 "message": "Input/output error" 00:21:45.363 } 00:21:45.363 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:45.363 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:45.363 
04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:45.363 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:45.363 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:45.363 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:45.363 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:45.623 nvme0n1 00:21:45.623 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:45.623 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:45.623 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.882 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.882 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.882 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.142 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:21:46.142 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.142 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.142 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.142 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:46.142 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:46.142 04:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:46.712 nvme0n1 00:21:46.712 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:46.712 04:38:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:46.712 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.972 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.972 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:46.972 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.972 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.972 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.972 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:46.972 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:46.972 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.232 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.232 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:47.232 04:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxNjE5YWIzZjc2ZDlmMWQ3ZDRjOGFjYzQ1M2NhOTVjYjk2ODM4MDMzOTVhODAwNTIxNTdmMTJmZTM0YWY3NYZ1abw=: 00:21:47.801 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:47.801 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:47.801 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:47.801 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:47.801 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:47.801 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:47.801 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:47.801 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.801 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:48.061 04:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:48.320 request: 00:21:48.320 { 00:21:48.320 "name": "nvme0", 00:21:48.320 "trtype": "rdma", 00:21:48.320 "traddr": "192.168.100.8", 00:21:48.320 "adrfam": "ipv4", 00:21:48.320 "trsvcid": "4420", 00:21:48.320 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:48.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:48.320 "prchk_reftag": false, 00:21:48.320 "prchk_guard": false, 00:21:48.320 "hdgst": false, 00:21:48.321 "ddgst": false, 00:21:48.321 "dhchap_key": "key1", 00:21:48.321 "allow_unrecognized_csi": false, 00:21:48.321 "method": "bdev_nvme_attach_controller", 00:21:48.321 "req_id": 1 00:21:48.321 } 00:21:48.321 Got JSON-RPC error response 00:21:48.321 response: 00:21:48.321 { 00:21:48.321 "code": -5, 00:21:48.321 "message": "Input/output error" 00:21:48.321 } 00:21:48.580 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:48.580 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.580 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.580 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.580 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:48.580 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:48.581 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:49.149 nvme0n1 00:21:49.149 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:49.149 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:49.149 04:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.409 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.409 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.409 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.668 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:49.668 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.668 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.668 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.668 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:49.668 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:49.669 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:49.931 nvme0n1 00:21:49.931 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:49.931 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:49.931 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.190 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.190 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: '' 2s 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: ]] 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NGFlNWUwODg4NzVkNzVjNzJlYmUyMmEyYjBkYmM4NWY0gRmq: 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:50.191 04:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:52.729 04:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:52.729 04:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:52.729 04:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:52.729 04:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.729 04:38:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: 2s 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: ]] 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Yzc2ZjYyZTI1ZjlhZmYwODJkNDE0M2E3MjE5NGIwYTc0YWE1YWQxMDkwNmViYjUxUy64PQ==: 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:52.729 04:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:54.636 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.637 04:38:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.637 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:55.207 nvme0n1 00:21:55.207 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.207 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.207 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.207 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.207 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.207 04:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.775 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:55.775 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:55.775 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.034 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.034 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:56.034 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.034 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.034 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.034 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:56.034 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:56.034 04:38:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:56.034 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:56.034 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.294 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.294 04:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:56.294 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:56.862 request: 00:21:56.862 { 00:21:56.862 "name": "nvme0", 00:21:56.862 "dhchap_key": "key1", 00:21:56.862 "dhchap_ctrlr_key": "key3", 00:21:56.862 "method": "bdev_nvme_set_keys", 00:21:56.862 "req_id": 1 00:21:56.862 } 00:21:56.862 Got JSON-RPC error response 00:21:56.862 response: 00:21:56.862 { 00:21:56.862 "code": -13, 00:21:56.862 "message": "Permission denied" 00:21:56.862 } 00:21:56.862 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:56.862 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:56.862 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:56.862 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:56.862 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:21:56.862 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:56.862 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.862 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:56.862 04:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:58.243 04:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:58.812 nvme0n1 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:58.812 
04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:58.812 04:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:59.381 request: 00:21:59.381 { 00:21:59.381 "name": "nvme0", 00:21:59.381 "dhchap_key": "key2", 00:21:59.381 "dhchap_ctrlr_key": "key0", 00:21:59.381 "method": "bdev_nvme_set_keys", 00:21:59.381 "req_id": 1 00:21:59.381 } 00:21:59.381 Got JSON-RPC error response 00:21:59.381 response: 00:21:59.381 { 00:21:59.381 "code": -13, 00:21:59.381 "message": "Permission denied" 00:21:59.381 } 00:21:59.381 04:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:59.381 04:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:59.381 04:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:59.381 04:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:59.381 04:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:59.381 04:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:59.381 04:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.640 04:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:59.640 04:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:00.577 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:00.578 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:00.578 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:00.837 04:38:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 316390 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 316390 ']' 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 316390 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316390 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:00.837 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:00.838 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316390' 00:22:00.838 killing process with pid 316390 00:22:00.838 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 316390 00:22:00.838 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 316390 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:01.097 rmmod nvme_rdma 00:22:01.097 rmmod nvme_fabrics 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 341125 ']' 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 341125 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 341125 ']' 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 341125 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.097 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341125 00:22:01.356 04:38:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:01.356 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:01.356 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341125' 00:22:01.356 killing process with pid 341125 00:22:01.356 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 341125 00:22:01.356 04:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 341125 00:22:01.356 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:01.356 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:01.356 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kIG /tmp/spdk.key-sha256.0EY /tmp/spdk.key-sha384.ZaN /tmp/spdk.key-sha512.0iP /tmp/spdk.key-sha512.Zyo /tmp/spdk.key-sha384.BXH /tmp/spdk.key-sha256.7w4 '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:22:01.356 00:22:01.356 real 2m46.569s 00:22:01.356 user 6m20.876s 00:22:01.356 sys 0m25.013s 00:22:01.356 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.356 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.356 ************************************ 00:22:01.356 END TEST nvmf_auth_target 00:22:01.356 ************************************ 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:01.615 ************************************ 00:22:01.615 START TEST nvmf_fuzz 00:22:01.615 ************************************ 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:01.615 * Looking for test storage... 
00:22:01.615 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.615 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.616 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:01.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.875 --rc genhtml_branch_coverage=1 00:22:01.875 --rc genhtml_function_coverage=1 00:22:01.875 --rc genhtml_legend=1 00:22:01.875 --rc geninfo_all_blocks=1 00:22:01.875 --rc geninfo_unexecuted_blocks=1 00:22:01.875 00:22:01.875 ' 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:01.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.875 --rc genhtml_branch_coverage=1 00:22:01.875 --rc genhtml_function_coverage=1 00:22:01.875 --rc genhtml_legend=1 00:22:01.875 --rc geninfo_all_blocks=1 00:22:01.875 --rc geninfo_unexecuted_blocks=1 00:22:01.875 00:22:01.875 ' 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:01.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.875 --rc genhtml_branch_coverage=1 00:22:01.875 --rc genhtml_function_coverage=1 00:22:01.875 --rc genhtml_legend=1 00:22:01.875 --rc geninfo_all_blocks=1 00:22:01.875 --rc geninfo_unexecuted_blocks=1 00:22:01.875 00:22:01.875 ' 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:01.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.875 --rc genhtml_branch_coverage=1 00:22:01.875 --rc genhtml_function_coverage=1 00:22:01.875 --rc genhtml_legend=1 00:22:01.875 --rc geninfo_all_blocks=1 00:22:01.875 --rc geninfo_unexecuted_blocks=1 00:22:01.875 00:22:01.875 ' 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.875 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.876 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.876 04:38:40 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:10.019 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:10.019 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
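[Annotation] The discovery pass above has matched both Mellanox ports (0000:d9:00.0 and 0000:d9:00.1, device 0x1015) and next resolves each PCI function to its kernel net device and IPv4 address. A minimal standalone sketch of that lookup, using the same sysfs path and `ip -o -4` pipeline the harness runs in the entries that follow (the loop and variable names here are illustrative, not taken from nvmf/common.sh):

  #!/usr/bin/env bash
  # Resolve each RDMA-capable PCI function to its net device and IPv4 address.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      # The bound net device is exposed under the PCI device's net/ directory.
      for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
          dev=${netdir##*/}                                  # e.g. mlx_0_0
          # Same pipeline as get_ip_address: keep field 4, strip the /24 suffix.
          ip4=$(ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1)
          echo "$pci -> $dev (${ip4:-no IPv4 assigned})"
      done
  done

On this box the lookup yields mlx_0_0 at 192.168.100.8 and mlx_0_1 at 192.168.100.9, as shown below.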
00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.019 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:10.020 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:10.020 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:10.020 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:10.020 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:10.020 altname enp217s0f0np0 00:22:10.020 altname ens818f0np0 00:22:10.020 inet 192.168.100.8/24 scope global mlx_0_0 
00:22:10.020 valid_lft forever preferred_lft forever 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:10.020 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:10.020 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:10.020 altname enp217s0f1np1 00:22:10.020 altname ens818f1np1 00:22:10.020 inet 192.168.100.9/24 scope global mlx_0_1 00:22:10.020 valid_lft forever preferred_lft forever 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:10.020 04:38:47 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:10.020 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:10.021 192.168.100.9' 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:10.021 192.168.100.9' 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:10.021 192.168.100.9' 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=347913 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 
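[Annotation] With nvmf_tgt started (nvmfpid 347913, core mask 0x1), the fuzz test provisions the target over the default /var/tmp/spdk.sock RPC socket: an RDMA transport, a 64 MB malloc-backed namespace, and a listener on 192.168.100.8:4420. Condensed into a standalone sketch, with the rpc.py path and all values taken from the run below (error handling and the waitforlisten step omitted):

  #!/usr/bin/env bash
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Same RPC sequence fabrics_fuzz.sh issues once the target is listening:
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create -b Malloc0 64 512      # 64 MB bdev, 512 B blocks
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
  $RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
  # nvme_fuzz then connects with:
  #   trtype:rdma adrfam:IPv4 subnqn:$NQN traddr:192.168.100.8 trsvcid:4420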
00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 347913 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 347913 ']' 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.021 04:38:47 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:10.021 Malloc0 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 
192.168.100.8 -s 4420 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:22:10.021 04:38:48 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:22:42.105 Fuzzing completed. Shutting down the fuzz application 00:22:42.105 00:22:42.105 Dumping successful admin opcodes: 00:22:42.105 8, 9, 10, 24, 00:22:42.105 Dumping successful io opcodes: 00:22:42.105 0, 9, 00:22:42.105 NS: 0x2000008f1f00 I/O qp, Total commands completed: 1013129, total successful commands: 5938, random_seed: 206666368 00:22:42.105 NS: 0x2000008f1f00 admin qp, Total commands completed: 131856, total successful commands: 1072, random_seed: 2107737280 00:22:42.105 04:39:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:42.105 Fuzzing completed. Shutting down the fuzz application 00:22:42.105 00:22:42.105 Dumping successful admin opcodes: 00:22:42.105 24, 00:22:42.105 Dumping successful io opcodes: 00:22:42.105 00:22:42.105 NS: 0x2000008f1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1461288794 00:22:42.105 NS: 0x2000008f1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1461350744 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-rdma 00:22:42.105 rmmod nvme_rdma 00:22:42.105 rmmod nvme_fabrics 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 347913 ']' 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 347913 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 347913 ']' 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 347913 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 347913 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.105 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 347913' 00:22:42.105 killing process with pid 347913 00:22:42.106 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 347913 00:22:42.106 04:39:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 347913 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:42.106 00:22:42.106 real 0m39.882s 00:22:42.106 user 0m49.886s 00:22:42.106 sys 0m21.442s 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:42.106 ************************************ 00:22:42.106 END TEST nvmf_fuzz 00:22:42.106 ************************************ 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:42.106 ************************************ 00:22:42.106 START TEST nvmf_multiconnection 00:22:42.106 ************************************ 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:22:42.106 * Looking for test storage... 00:22:42.106 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.106 --rc genhtml_branch_coverage=1 00:22:42.106 --rc genhtml_function_coverage=1 00:22:42.106 --rc genhtml_legend=1 00:22:42.106 --rc geninfo_all_blocks=1 00:22:42.106 --rc geninfo_unexecuted_blocks=1 00:22:42.106 00:22:42.106 ' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.106 --rc genhtml_branch_coverage=1 00:22:42.106 --rc genhtml_function_coverage=1 00:22:42.106 --rc genhtml_legend=1 00:22:42.106 --rc geninfo_all_blocks=1 00:22:42.106 --rc geninfo_unexecuted_blocks=1 00:22:42.106 00:22:42.106 ' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.106 --rc genhtml_branch_coverage=1 00:22:42.106 --rc genhtml_function_coverage=1 00:22:42.106 --rc genhtml_legend=1 00:22:42.106 --rc geninfo_all_blocks=1 00:22:42.106 --rc geninfo_unexecuted_blocks=1 00:22:42.106 00:22:42.106 ' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.106 --rc genhtml_branch_coverage=1 00:22:42.106 --rc genhtml_function_coverage=1 00:22:42.106 --rc genhtml_legend=1 00:22:42.106 --rc geninfo_all_blocks=1 00:22:42.106 --rc geninfo_unexecuted_blocks=1 00:22:42.106 00:22:42.106 ' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.106 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.107 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.107 04:39:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.687 
04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:48.687 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:48.688 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:48.688 04:39:27 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:48.688 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:48.688 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.688 04:39:27 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:48.688 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:48.688 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:48.688 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:48.688 altname enp217s0f0np0 00:22:48.688 altname ens818f0np0 00:22:48.688 inet 192.168.100.8/24 scope global mlx_0_0 00:22:48.688 valid_lft forever preferred_lft forever 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:48.688 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:48.688 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:48.688 altname enp217s0f1np1 00:22:48.688 altname ens818f1np1 00:22:48.688 inet 192.168.100.9/24 scope global mlx_0_1 00:22:48.688 valid_lft forever preferred_lft forever 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:48.688 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:48.689 192.168.100.9' 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:48.689 192.168.100.9' 00:22:48.689 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:48.949 192.168.100.9' 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=357203 00:22:48.949 04:39:27 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 357203 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 357203 ']' 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.949 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:48.949 [2024-11-17 04:39:27.618152] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:22:48.949 [2024-11-17 04:39:27.618209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.949 [2024-11-17 04:39:27.712086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.949 [2024-11-17 04:39:27.736000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.949 [2024-11-17 04:39:27.736041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.949 [2024-11-17 04:39:27.736051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.949 [2024-11-17 04:39:27.736061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.949 [2024-11-17 04:39:27.736068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
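[editor's note] Back in the fuzz test traced earlier, fabrics_fuzz.sh drives two nvme_fuzz passes against the subsystem it exported. Condensed below, with the transport ID pulled into a variable; the flag annotations are inferred from the option names and the roughly 30-second gap between the 04:38:48 and 04:39:18 timestamps, so treat them as best-effort readings rather than documented semantics:

    TRID='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
    # Pass 1: timed random fuzzing with a fixed seed (-t 30 seconds, -S 123456).
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
    # Pass 2: replay the commands recorded in example.json (-j).
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" -j ./test/app/fuzz/nvme_fuzz/example.json -a

Both passes print per-queue opcode tallies and seeds on exit, which is what the "Dumping successful admin opcodes" blocks in the trace above are.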
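[editor's note] The multiconnection test's lcov check runs through scripts/common.sh's cmp_versions, whose xtrace is interleaved above (IFS=.-:, read -ra ver1/ver2, then a field-by-field decimal compare). A stripped-down sketch of that comparison; the real helper also sanitizes non-numeric fields via its decimal function, which this version skips:

    # Return success when dotted version $1 is strictly less than $2.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov older than 2.x: use the pre-2.0 option set"

Missing fields default to 0 via ${ver1[v]:-0}, which is how "1.15" (two fields) compares cleanly against "2" (one field), exactly the ver1_l=2 / ver2_l=1 case in the trace.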
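[editor's note] One detail worth flagging in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and logs "[: : integer expression expected", i.e. an empty variable reached a numeric test. The idiomatic guard is to default the variable before comparing (illustrative only; VAR is a placeholder, not the variable the script actually tests):

    # Numeric test against a possibly-empty variable: default it first.
    if [[ "${VAR:-0}" -eq 1 ]]; then
        echo "flag set"
    fi

The test still proceeds because the failed test simply takes the false branch, but the message shows up in every run of this suite.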
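[editor's note] The multiconnection target itself then starts the same way the fuzz target did: nvmf_tgt with shm id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF) and, this time, a four-core mask (-m 0xF), which is why four reactor threads report in on cores 0-3 below. A sketch of that launch-and-wait sequence; the polling loop is an assumption standing in for waitforlisten, whose body is not visible in this trace:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the app answers on its RPC socket (stand-in for waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"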
00:22:48.949 [2024-11-17 04:39:27.737855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.949 [2024-11-17 04:39:27.737981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.949 [2024-11-17 04:39:27.738022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.949 [2024-11-17 04:39:27.738024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.208 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.208 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:22:49.208 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.208 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.208 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.208 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.208 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:49.208 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.208 04:39:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.208 [2024-11-17 04:39:27.903204] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x121fc50/0x1224100) succeed. 00:22:49.208 [2024-11-17 04:39:27.912275] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1221290/0x12657a0) succeed. 
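[editor's note] The nvmf_create_transport call above is what triggers the two create_ib_device notices: bringing up the RDMA transport enumerates the mlx5 devices. rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the equivalent direct invocation, with the exact flags from the trace, is:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192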
00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 Malloc1 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 [2024-11-17 04:39:28.097746] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 Malloc2 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
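[editor's note] From here the trace unrolls target/multiconnection.sh's per-subsystem loop (script lines 21-25, per the markers): for each of NVMF_SUBSYS=11 subsystems it creates a 64 MB malloc bdev with 512-byte blocks, a subsystem, a namespace, and an RDMA listener. Reconstructed from the unrolled trace:

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done

All eleven listeners share the same address and port (192.168.100.8:4420); the subsystems are distinguished purely by NQN, which is the point of the multiconnection test.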
00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 Malloc3 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 
04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 Malloc4 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.468 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.468 Malloc5 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.469 04:39:28 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.469 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.728 Malloc6 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.728 04:39:28 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.728 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 Malloc7 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 Malloc8 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 Malloc9 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 Malloc10 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.729 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.729 Malloc11 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.988 04:39:28 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:49.988 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.989 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:22:49.989 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.989 04:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:50.942 04:39:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:50.942 04:39:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:50.942 04:39:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:50.942 04:39:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:50.942 04:39:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:52.848 04:39:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:52.848 04:39:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:52.848 04:39:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:22:52.848 04:39:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:52.848 04:39:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:52.848 04:39:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:52.848 04:39:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.848 04:39:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:22:53.785 04:39:32 
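The trace above is target/multiconnection.sh lines 21-25 looping over the 11 subsystems: for each index it creates a 64 MiB malloc bdev with 512-byte blocks, creates subsystem cnode$i with serial number SPDK$i (-a allows any host), attaches the bdev as a namespace, and adds an RDMA listener on 192.168.100.8:4420. A minimal sketch of that loop, substituting scripts/rpc.py for the suite's rpc_cmd wrapper:

NVMF_SUBSYS=11
for i in $(seq 1 $NVMF_SUBSYS); do
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done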
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:53.785 04:39:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:53.785 04:39:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:53.785 04:39:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:53.785 04:39:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:56.322 04:39:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:56.322 04:39:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:56.322 04:39:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:22:56.322 04:39:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:56.322 04:39:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:56.322 04:39:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:56.322 04:39:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:56.322 04:39:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:22:56.889 04:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:56.889 04:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:56.889 04:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:56.889 04:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:56.889 04:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:58.795 04:39:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:58.795 04:39:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:58.795 04:39:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:22:59.054 04:39:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:59.055 04:39:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:59.055 04:39:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:59.055 04:39:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:59.055 04:39:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:22:59.993 04:39:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:59.993 04:39:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:59.993 04:39:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:59.993 04:39:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:59.993 04:39:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:01.911 04:39:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:01.911 04:39:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:01.911 04:39:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:23:01.911 04:39:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:01.911 04:39:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:01.911 04:39:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:01.911 04:39:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.911 04:39:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:23:02.849 04:39:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:02.849 04:39:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:02.849 04:39:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:02.849 04:39:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:02.849 04:39:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:05.390 04:39:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:05.390 04:39:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:05.390 04:39:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:23:05.390 04:39:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:05.390 04:39:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:05.390 04:39:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:05.390 04:39:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
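Each nvme connect above is followed by the same waitforserial poll (the autotest_common.sh@1202-1212 lines in the trace): wait 2 s, list block devices with lsblk -l -o NAME,SERIAL, and count the entries matching the subsystem's serial until exactly one shows up, retrying up to 15 times. A sketch of the helper reconstructed from the xtrace:

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2                                                # give the kernel time to surface the namespace
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0  # device visible, connection established
    done
    return 1
}

The remaining subsystems (cnode6 through cnode11) are connected and verified the same way below.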
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.390 04:39:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:23:05.959 04:39:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:05.959 04:39:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:05.959 04:39:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:05.959 04:39:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:05.959 04:39:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:07.866 04:39:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:07.867 04:39:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:07.867 04:39:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:23:07.867 04:39:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:07.867 04:39:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:07.867 04:39:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:07.867 04:39:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:07.867 04:39:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:23:09.247 04:39:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:09.247 04:39:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:09.247 04:39:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:09.247 04:39:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:09.247 04:39:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:11.154 04:39:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:11.154 04:39:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:11.154 04:39:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:23:11.154 04:39:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:11.154 04:39:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:23:11.154 04:39:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:11.154 04:39:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:11.154 04:39:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:23:12.092 04:39:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:12.092 04:39:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:12.092 04:39:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:12.092 04:39:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:12.092 04:39:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:14.013 04:39:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:14.013 04:39:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:14.013 04:39:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:23:14.013 04:39:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:14.013 04:39:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:14.013 04:39:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:14.013 04:39:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:14.013 04:39:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:23:14.952 04:39:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:14.952 04:39:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:14.952 04:39:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:14.952 04:39:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:14.952 04:39:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:17.489 04:39:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:17.489 04:39:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:17.489 04:39:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:23:17.489 04:39:55 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:17.489 04:39:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:17.489 04:39:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:17.489 04:39:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:17.489 04:39:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:23:18.058 04:39:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:18.058 04:39:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:18.058 04:39:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:18.058 04:39:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:18.058 04:39:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:19.965 04:39:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:19.965 04:39:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:19.965 04:39:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:23:19.965 04:39:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:19.965 04:39:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:19.965 04:39:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:19.965 04:39:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:19.965 04:39:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:23:21.343 04:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:21.343 04:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:21.343 04:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:21.343 04:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:21.343 04:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:23.248 04:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:23.248 04:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:23.248 04:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:23:23.248 04:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:23.248 04:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:23.248 04:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:23.248 04:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:23.248 [global] 00:23:23.248 thread=1 00:23:23.248 invalidate=1 00:23:23.248 rw=read 00:23:23.248 time_based=1 00:23:23.248 runtime=10 00:23:23.248 ioengine=libaio 00:23:23.248 direct=1 00:23:23.248 bs=262144 00:23:23.248 iodepth=64 00:23:23.248 norandommap=1 00:23:23.248 numjobs=1 00:23:23.248 00:23:23.248 [job0] 00:23:23.248 filename=/dev/nvme0n1 00:23:23.248 [job1] 00:23:23.248 filename=/dev/nvme10n1 00:23:23.248 [job2] 00:23:23.248 filename=/dev/nvme1n1 00:23:23.248 [job3] 00:23:23.249 filename=/dev/nvme2n1 00:23:23.249 [job4] 00:23:23.249 filename=/dev/nvme3n1 00:23:23.249 [job5] 00:23:23.249 filename=/dev/nvme4n1 00:23:23.249 [job6] 00:23:23.249 filename=/dev/nvme5n1 00:23:23.249 [job7] 00:23:23.249 filename=/dev/nvme6n1 00:23:23.249 [job8] 00:23:23.249 filename=/dev/nvme7n1 00:23:23.249 [job9] 00:23:23.249 filename=/dev/nvme8n1 00:23:23.249 [job10] 00:23:23.249 filename=/dev/nvme9n1 00:23:23.249 Could not set queue depth (nvme0n1) 00:23:23.249 Could not set queue depth (nvme10n1) 00:23:23.249 Could not set queue depth (nvme1n1) 00:23:23.249 Could not set queue depth (nvme2n1) 00:23:23.249 Could not set queue depth (nvme3n1) 00:23:23.249 Could not set queue depth (nvme4n1) 00:23:23.249 Could not set queue depth (nvme5n1) 00:23:23.249 Could not set queue depth (nvme6n1) 00:23:23.249 Could not set queue depth (nvme7n1) 00:23:23.249 Could not set queue depth (nvme8n1) 00:23:23.249 Could not set queue depth (nvme9n1) 00:23:23.506 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
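The job file echoed above pairs one fio job with each of the eleven connected namespaces, and the wrapper flags map directly onto its [global] options: -i 262144 becomes bs, -d 64 becomes iodepth, -t read becomes rw, and -r 10 becomes runtime. A hypothetical single-device equivalent with plain fio, assuming the wrapper does little beyond enumerating /dev/nvme*n1 and writing this file:

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=read --bs=262144 --iodepth=64 --runtime=10 --time_based=1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --norandommap=1 --numjobs=1

The "Could not set queue depth" lines are non-fatal: the per-device queue depth could not be adjusted for the NVMe-oF block devices, and the jobs proceed with defaults, as the per-job descriptors that follow confirm.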
256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:23.507 fio-3.35 00:23:23.507 Starting 11 threads 00:23:35.724 00:23:35.725 job0: (groupid=0, jobs=1): err= 0: pid=363432: Sun Nov 17 04:40:12 2024 00:23:35.725 read: IOPS=773, BW=193MiB/s (203MB/s)(1946MiB/10057msec) 00:23:35.725 slat (usec): min=14, max=40748, avg=1280.97, stdev=4322.80 00:23:35.725 clat (msec): min=11, max=134, avg=81.33, stdev= 8.20 00:23:35.725 lat (msec): min=11, max=134, avg=82.61, stdev= 9.18 00:23:35.725 clat percentiles (msec): 00:23:35.725 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 79], 00:23:35.725 | 30.00th=[ 81], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 82], 00:23:35.725 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 93], 00:23:35.725 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 127], 99.95th=[ 136], 00:23:35.725 | 99.99th=[ 136] 00:23:35.725 bw ( KiB/s): min=187392, max=223232, per=5.04%, avg=197632.00, stdev=9571.44, samples=20 00:23:35.725 iops : min= 732, max= 872, avg=772.00, stdev=37.39, samples=20 00:23:35.725 lat (msec) : 20=0.32%, 50=0.30%, 100=96.42%, 250=2.97% 00:23:35.725 cpu : usr=0.29%, sys=3.59%, ctx=1449, majf=0, minf=3659 00:23:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.725 issued rwts: total=7783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.725 job1: (groupid=0, jobs=1): err= 0: pid=363433: Sun Nov 17 04:40:12 2024 00:23:35.725 read: IOPS=2110, BW=528MiB/s (553MB/s)(5297MiB/10038msec) 00:23:35.725 slat (usec): min=10, max=16576, avg=468.46, stdev=1256.59 00:23:35.725 clat (usec): min=2305, max=78212, avg=29825.16, stdev=14814.25 00:23:35.725 lat (usec): min=2541, max=78237, avg=30293.62, stdev=15070.76 00:23:35.725 clat percentiles (usec): 00:23:35.725 | 1.00th=[13042], 5.00th=[13829], 10.00th=[14484], 20.00th=[15008], 00:23:35.725 | 30.00th=[15401], 40.00th=[15926], 50.00th=[29492], 60.00th=[34866], 00:23:35.725 | 70.00th=[44303], 80.00th=[46924], 90.00th=[47973], 95.00th=[49546], 00:23:35.725 | 99.00th=[54264], 99.50th=[58459], 99.90th=[64750], 99.95th=[66323], 00:23:35.725 | 99.99th=[78119] 00:23:35.725 bw ( KiB/s): min=323584, max=1100800, per=13.80%, avg=540845.30, stdev=296188.60, samples=20 00:23:35.725 iops : min= 1264, max= 4300, avg=2112.65, stdev=1156.95, samples=20 00:23:35.725 lat (msec) : 4=0.04%, 10=0.27%, 20=45.27%, 50=50.22%, 100=4.20% 00:23:35.725 cpu : usr=0.33%, sys=5.66%, ctx=3882, majf=0, minf=4097 00:23:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.725 issued rwts: total=21186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.725 job2: (groupid=0, jobs=1): err= 0: pid=363436: Sun Nov 17 04:40:12 2024 00:23:35.725 read: IOPS=776, BW=194MiB/s (204MB/s)(1954MiB/10057msec) 00:23:35.725 slat (usec): min=13, max=46216, avg=1271.20, stdev=3635.55 00:23:35.725 clat (msec): min=11, max=141, avg=81.01, stdev= 8.04 00:23:35.725 lat (msec): min=11, max=141, avg=82.28, stdev= 8.76 00:23:35.725 clat percentiles (msec): 00:23:35.725 | 1.00th=[ 65], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 79], 00:23:35.725 | 
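A quick consistency check on job0 above: its average of 772 IOPS at bs=256 KiB gives 772 × 256 KiB = 197632 KiB/s, matching both the avg= field of its bw line and the headline BW=193MiB/s (the parenthesized 203MB/s is the same rate in decimal megabytes). The same BW = IOPS × block size relation holds for every job in this group, so the bw and iops lines can be cross-checked throughout the results that follow.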
30.00th=[ 81], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 82], 00:23:35.725 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 91], 00:23:35.725 | 99.00th=[ 101], 99.50th=[ 106], 99.90th=[ 133], 99.95th=[ 136], 00:23:35.725 | 99.99th=[ 142] 00:23:35.725 bw ( KiB/s): min=186368, max=219648, per=5.06%, avg=198425.60, stdev=7525.70, samples=20 00:23:35.725 iops : min= 728, max= 858, avg=775.10, stdev=29.40, samples=20 00:23:35.725 lat (msec) : 20=0.50%, 50=0.28%, 100=98.05%, 250=1.16% 00:23:35.725 cpu : usr=0.41%, sys=3.78%, ctx=1518, majf=0, minf=4097 00:23:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.725 issued rwts: total=7814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.725 job3: (groupid=0, jobs=1): err= 0: pid=363437: Sun Nov 17 04:40:12 2024 00:23:35.725 read: IOPS=1498, BW=375MiB/s (393MB/s)(3762MiB/10040msec) 00:23:35.725 slat (usec): min=11, max=11274, avg=649.95, stdev=1521.34 00:23:35.725 clat (usec): min=12909, max=84020, avg=42007.81, stdev=9418.93 00:23:35.725 lat (usec): min=13155, max=84056, avg=42657.76, stdev=9636.08 00:23:35.725 clat percentiles (usec): 00:23:35.725 | 1.00th=[27919], 5.00th=[29492], 10.00th=[30016], 20.00th=[30802], 00:23:35.725 | 30.00th=[32375], 40.00th=[42730], 50.00th=[44827], 60.00th=[46400], 00:23:35.725 | 70.00th=[47449], 80.00th=[48497], 90.00th=[51119], 95.00th=[57934], 00:23:35.725 | 99.00th=[65799], 99.50th=[67634], 99.90th=[77071], 99.95th=[80217], 00:23:35.725 | 99.99th=[84411] 00:23:35.725 bw ( KiB/s): min=248817, max=529920, per=9.78%, avg=383589.65, stdev=81840.86, samples=20 00:23:35.725 iops : min= 971, max= 2070, avg=1498.35, stdev=319.77, samples=20 00:23:35.725 lat (msec) : 20=0.29%, 50=87.70%, 100=12.02% 00:23:35.725 cpu : usr=0.54%, sys=6.18%, ctx=3085, majf=0, minf=4097 00:23:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.725 issued rwts: total=15046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.725 job4: (groupid=0, jobs=1): err= 0: pid=363438: Sun Nov 17 04:40:12 2024 00:23:35.725 read: IOPS=775, BW=194MiB/s (203MB/s)(1949MiB/10056msec) 00:23:35.725 slat (usec): min=17, max=27273, avg=1279.22, stdev=3293.53 00:23:35.725 clat (msec): min=13, max=129, avg=81.21, stdev= 7.26 00:23:35.725 lat (msec): min=14, max=129, avg=82.49, stdev= 7.89 00:23:35.725 clat percentiles (msec): 00:23:35.725 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 79], 00:23:35.725 | 30.00th=[ 81], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 82], 00:23:35.725 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 89], 95.00th=[ 93], 00:23:35.725 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 115], 99.95th=[ 129], 00:23:35.725 | 99.99th=[ 130] 00:23:35.725 bw ( KiB/s): min=189440, max=220160, per=5.05%, avg=197913.60, stdev=8254.87, samples=20 00:23:35.725 iops : min= 740, max= 860, avg=773.10, stdev=32.25, samples=20 00:23:35.725 lat (msec) : 20=0.26%, 50=0.30%, 100=97.78%, 250=1.67% 00:23:35.725 cpu : usr=0.37%, sys=3.87%, ctx=1515, majf=0, minf=4097 00:23:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.2% 00:23:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.725 issued rwts: total=7794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.725 job5: (groupid=0, jobs=1): err= 0: pid=363441: Sun Nov 17 04:40:12 2024 00:23:35.725 read: IOPS=1170, BW=293MiB/s (307MB/s)(2942MiB/10057msec) 00:23:35.725 slat (usec): min=12, max=54534, avg=839.86, stdev=3022.00 00:23:35.725 clat (msec): min=12, max=137, avg=53.79, stdev=15.42 00:23:35.725 lat (msec): min=13, max=144, avg=54.63, stdev=15.88 00:23:35.725 clat percentiles (msec): 00:23:35.725 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 45], 00:23:35.725 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 47], 60.00th=[ 49], 00:23:35.725 | 70.00th=[ 51], 80.00th=[ 64], 90.00th=[ 83], 95.00th=[ 85], 00:23:35.725 | 99.00th=[ 93], 99.50th=[ 117], 99.90th=[ 131], 99.95th=[ 133], 00:23:35.725 | 99.99th=[ 138] 00:23:35.725 bw ( KiB/s): min=180224, max=378880, per=7.64%, avg=299698.75, stdev=68999.41, samples=20 00:23:35.725 iops : min= 704, max= 1480, avg=1170.65, stdev=269.56, samples=20 00:23:35.725 lat (msec) : 20=0.24%, 50=70.02%, 100=29.04%, 250=0.70% 00:23:35.725 cpu : usr=0.37%, sys=5.26%, ctx=2250, majf=0, minf=4097 00:23:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.725 issued rwts: total=11769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.725 job6: (groupid=0, jobs=1): err= 0: pid=363445: Sun Nov 17 04:40:12 2024 00:23:35.725 read: IOPS=1805, BW=451MiB/s (473MB/s)(4527MiB/10027msec) 00:23:35.725 slat (usec): min=10, max=18052, avg=545.13, stdev=1436.91 00:23:35.725 clat (usec): min=557, max=73301, avg=34854.59, stdev=10976.22 00:23:35.725 lat (usec): min=605, max=78006, avg=35399.72, stdev=11202.03 00:23:35.725 clat percentiles (usec): 00:23:35.725 | 1.00th=[13173], 5.00th=[15664], 10.00th=[16581], 20.00th=[29230], 00:23:35.725 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31589], 60.00th=[33817], 00:23:35.725 | 70.00th=[42206], 80.00th=[44827], 90.00th=[46924], 95.00th=[50594], 00:23:35.725 | 99.00th=[64226], 99.50th=[65799], 99.90th=[69731], 99.95th=[69731], 00:23:35.725 | 99.99th=[72877] 00:23:35.725 bw ( KiB/s): min=250356, max=820224, per=11.78%, avg=461977.00, stdev=120984.20, samples=20 00:23:35.725 iops : min= 977, max= 3204, avg=1804.55, stdev=472.68, samples=20 00:23:35.725 lat (usec) : 750=0.01% 00:23:35.725 lat (msec) : 2=0.07%, 4=0.10%, 10=0.47%, 20=10.82%, 50=83.30% 00:23:35.725 lat (msec) : 100=5.23% 00:23:35.725 cpu : usr=0.43%, sys=5.68%, ctx=3643, majf=0, minf=4097 00:23:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.725 issued rwts: total=18108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.725 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.725 job7: (groupid=0, jobs=1): err= 0: pid=363446: Sun Nov 17 04:40:12 2024 00:23:35.726 read: IOPS=988, BW=247MiB/s (259MB/s)(2480MiB/10037msec) 00:23:35.726 slat 
(usec): min=11, max=26873, avg=992.52, stdev=2716.31 00:23:35.726 clat (msec): min=12, max=116, avg=63.70, stdev=20.96 00:23:35.726 lat (msec): min=12, max=116, avg=64.69, stdev=21.43 00:23:35.726 clat percentiles (msec): 00:23:35.726 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 45], 00:23:35.726 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 74], 60.00th=[ 80], 00:23:35.726 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 84], 95.00th=[ 88], 00:23:35.726 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 107], 99.95th=[ 111], 00:23:35.726 | 99.99th=[ 116] 00:23:35.726 bw ( KiB/s): min=190976, max=466432, per=6.44%, avg=252384.50, stdev=89738.77, samples=20 00:23:35.726 iops : min= 746, max= 1822, avg=985.85, stdev=350.56, samples=20 00:23:35.726 lat (msec) : 20=0.43%, 50=38.51%, 100=60.58%, 250=0.47% 00:23:35.726 cpu : usr=0.45%, sys=3.98%, ctx=2080, majf=0, minf=4097 00:23:35.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.726 issued rwts: total=9921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.726 job8: (groupid=0, jobs=1): err= 0: pid=363449: Sun Nov 17 04:40:12 2024 00:23:35.726 read: IOPS=775, BW=194MiB/s (203MB/s)(1949MiB/10057msec) 00:23:35.726 slat (usec): min=13, max=31247, avg=1279.17, stdev=3374.48 00:23:35.726 clat (msec): min=13, max=128, avg=81.19, stdev= 7.27 00:23:35.726 lat (msec): min=13, max=128, avg=82.47, stdev= 7.94 00:23:35.726 clat percentiles (msec): 00:23:35.726 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 79], 00:23:35.726 | 30.00th=[ 81], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:23:35.726 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 92], 00:23:35.726 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 121], 99.95th=[ 129], 00:23:35.726 | 99.99th=[ 129] 00:23:35.726 bw ( KiB/s): min=180224, max=218112, per=5.05%, avg=197959.15, stdev=8870.52, samples=20 00:23:35.726 iops : min= 704, max= 852, avg=773.25, stdev=34.65, samples=20 00:23:35.726 lat (msec) : 20=0.27%, 50=0.31%, 100=98.06%, 250=1.36% 00:23:35.726 cpu : usr=0.30%, sys=3.95%, ctx=1499, majf=0, minf=4097 00:23:35.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.726 issued rwts: total=7796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.726 job9: (groupid=0, jobs=1): err= 0: pid=363450: Sun Nov 17 04:40:12 2024 00:23:35.726 read: IOPS=2135, BW=534MiB/s (560MB/s)(5345MiB/10014msec) 00:23:35.726 slat (usec): min=10, max=49915, avg=452.87, stdev=1243.28 00:23:35.726 clat (msec): min=6, max=146, avg=29.49, stdev=14.50 00:23:35.726 lat (msec): min=7, max=146, avg=29.94, stdev=14.73 00:23:35.726 clat percentiles (msec): 00:23:35.726 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 16], 00:23:35.726 | 30.00th=[ 16], 40.00th=[ 20], 50.00th=[ 30], 60.00th=[ 32], 00:23:35.726 | 70.00th=[ 34], 80.00th=[ 43], 90.00th=[ 47], 95.00th=[ 51], 00:23:35.726 | 99.00th=[ 68], 99.50th=[ 95], 99.90th=[ 101], 99.95th=[ 104], 00:23:35.726 | 99.99th=[ 107] 00:23:35.726 bw ( KiB/s): min=249856, max=1053184, per=13.92%, avg=545715.20, stdev=270161.11, samples=20 
00:23:35.726 iops : min= 976, max= 4114, avg=2131.80, stdev=1055.29, samples=20 00:23:35.726 lat (msec) : 10=0.08%, 20=39.92%, 50=54.83%, 100=5.08%, 250=0.08% 00:23:35.726 cpu : usr=0.41%, sys=6.46%, ctx=4175, majf=0, minf=4097 00:23:35.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.726 issued rwts: total=21381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.726 job10: (groupid=0, jobs=1): err= 0: pid=363451: Sun Nov 17 04:40:12 2024 00:23:35.726 read: IOPS=2535, BW=634MiB/s (665MB/s)(6355MiB/10025msec) 00:23:35.726 slat (usec): min=10, max=33450, avg=382.92, stdev=1004.36 00:23:35.726 clat (usec): min=792, max=61694, avg=24832.51, stdev=10090.68 00:23:35.726 lat (usec): min=881, max=94642, avg=25215.43, stdev=10270.44 00:23:35.726 clat percentiles (usec): 00:23:35.726 | 1.00th=[ 6849], 5.00th=[14615], 10.00th=[15139], 20.00th=[15664], 00:23:35.726 | 30.00th=[16057], 40.00th=[16450], 50.00th=[27657], 60.00th=[29754], 00:23:35.726 | 70.00th=[30802], 80.00th=[31589], 90.00th=[40633], 95.00th=[42730], 00:23:35.726 | 99.00th=[47449], 99.50th=[50070], 99.90th=[57410], 99.95th=[61080], 00:23:35.726 | 99.99th=[61604] 00:23:35.726 bw ( KiB/s): min=381440, max=1022464, per=16.56%, avg=649164.80, stdev=241208.49, samples=20 00:23:35.726 iops : min= 1490, max= 3994, avg=2535.80, stdev=942.22, samples=20 00:23:35.726 lat (usec) : 1000=0.03% 00:23:35.726 lat (msec) : 2=0.18%, 4=0.24%, 10=1.10%, 20=46.23%, 50=51.73% 00:23:35.726 lat (msec) : 100=0.50% 00:23:35.726 cpu : usr=0.36%, sys=6.35%, ctx=5415, majf=0, minf=4097 00:23:35.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:35.726 issued rwts: total=25421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:35.726 00:23:35.726 Run status group 0 (all jobs): 00:23:35.726 READ: bw=3829MiB/s (4015MB/s), 193MiB/s-634MiB/s (203MB/s-665MB/s), io=37.6GiB (40.4GB), run=10014-10057msec 00:23:35.726 00:23:35.726 Disk stats (read/write): 00:23:35.726 nvme0n1: ios=15313/0, merge=0/0, ticks=1222468/0, in_queue=1222468, util=96.81% 00:23:35.726 nvme10n1: ios=41957/0, merge=0/0, ticks=1217432/0, in_queue=1217432, util=97.02% 00:23:35.726 nvme1n1: ios=15322/0, merge=0/0, ticks=1221700/0, in_queue=1221700, util=97.38% 00:23:35.726 nvme2n1: ios=29689/0, merge=0/0, ticks=1220846/0, in_queue=1220846, util=97.55% 00:23:35.726 nvme3n1: ios=15293/0, merge=0/0, ticks=1221564/0, in_queue=1221564, util=97.63% 00:23:35.726 nvme4n1: ios=23258/0, merge=0/0, ticks=1219394/0, in_queue=1219394, util=98.04% 00:23:35.726 nvme5n1: ios=35664/0, merge=0/0, ticks=1220846/0, in_queue=1220846, util=98.23% 00:23:35.726 nvme6n1: ios=19427/0, merge=0/0, ticks=1223516/0, in_queue=1223516, util=98.35% 00:23:35.726 nvme7n1: ios=15295/0, merge=0/0, ticks=1221426/0, in_queue=1221426, util=98.85% 00:23:35.726 nvme8n1: ios=41786/0, merge=0/0, ticks=1221041/0, in_queue=1221041, util=99.08% 00:23:35.726 nvme9n1: ios=50284/0, merge=0/0, ticks=1217511/0, in_queue=1217511, util=99.22% 00:23:35.726 04:40:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
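The READ run-status line sums the eleven jobs: 37.6 GiB in roughly 10 s of runtime is the reported aggregate of 3829 MiB/s (37.6 × 1024 ÷ 10.05 ≈ 3830), with per-job rates spanning 193 MiB/s to 634 MiB/s. The Disk stats block is the kernel block-layer view of the same devices (I/Os issued, ticks, time in queue), and its util figures of roughly 97-99% confirm that every namespace stayed busy for almost the entire run. With the read pass complete, the same fio wrapper is invoked again below with -t randwrite for the write half of the test.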
target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:35.726 [global] 00:23:35.726 thread=1 00:23:35.726 invalidate=1 00:23:35.726 rw=randwrite 00:23:35.726 time_based=1 00:23:35.726 runtime=10 00:23:35.726 ioengine=libaio 00:23:35.726 direct=1 00:23:35.726 bs=262144 00:23:35.726 iodepth=64 00:23:35.726 norandommap=1 00:23:35.726 numjobs=1 00:23:35.726 00:23:35.726 [job0] 00:23:35.726 filename=/dev/nvme0n1 00:23:35.726 [job1] 00:23:35.726 filename=/dev/nvme10n1 00:23:35.726 [job2] 00:23:35.726 filename=/dev/nvme1n1 00:23:35.726 [job3] 00:23:35.726 filename=/dev/nvme2n1 00:23:35.726 [job4] 00:23:35.726 filename=/dev/nvme3n1 00:23:35.726 [job5] 00:23:35.726 filename=/dev/nvme4n1 00:23:35.726 [job6] 00:23:35.726 filename=/dev/nvme5n1 00:23:35.726 [job7] 00:23:35.726 filename=/dev/nvme6n1 00:23:35.726 [job8] 00:23:35.726 filename=/dev/nvme7n1 00:23:35.726 [job9] 00:23:35.726 filename=/dev/nvme8n1 00:23:35.726 [job10] 00:23:35.726 filename=/dev/nvme9n1 00:23:35.726 Could not set queue depth (nvme0n1) 00:23:35.726 Could not set queue depth (nvme10n1) 00:23:35.726 Could not set queue depth (nvme1n1) 00:23:35.726 Could not set queue depth (nvme2n1) 00:23:35.726 Could not set queue depth (nvme3n1) 00:23:35.726 Could not set queue depth (nvme4n1) 00:23:35.726 Could not set queue depth (nvme5n1) 00:23:35.726 Could not set queue depth (nvme6n1) 00:23:35.726 Could not set queue depth (nvme7n1) 00:23:35.726 Could not set queue depth (nvme8n1) 00:23:35.726 Could not set queue depth (nvme9n1) 00:23:35.726 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.726 fio-3.35 00:23:35.726 Starting 11 threads 00:23:45.713 00:23:45.713 job0: (groupid=0, jobs=1): err= 0: pid=365173: Sun Nov 17 04:40:23 2024 00:23:45.713 write: IOPS=948, BW=237MiB/s (249MB/s)(2388MiB/10067msec); 0 zone resets 00:23:45.713 slat (usec): min=28, max=29819, avg=1029.71, stdev=1967.64 00:23:45.713 clat (msec): min=2, max=155, avg=66.41, stdev=14.76 00:23:45.713 lat (msec): min=2, max=155, avg=67.44, stdev=14.97 00:23:45.713 clat percentiles (msec): 00:23:45.713 | 1.00th=[ 46], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 
56], 00:23:45.713 | 30.00th=[ 57], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 70], 00:23:45.713 | 70.00th=[ 75], 80.00th=[ 78], 90.00th=[ 89], 95.00th=[ 93], 00:23:45.713 | 99.00th=[ 106], 99.50th=[ 112], 99.90th=[ 140], 99.95th=[ 150], 00:23:45.713 | 99.99th=[ 157] 00:23:45.713 bw ( KiB/s): min=167424, max=291328, per=7.00%, avg=242892.80, stdev=46218.84, samples=20 00:23:45.713 iops : min= 654, max= 1138, avg=948.80, stdev=180.54, samples=20 00:23:45.713 lat (msec) : 4=0.10%, 10=0.08%, 20=0.08%, 50=0.89%, 100=97.13% 00:23:45.713 lat (msec) : 250=1.71% 00:23:45.713 cpu : usr=2.24%, sys=4.44%, ctx=2397, majf=0, minf=1 00:23:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.713 issued rwts: total=0,9551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.713 job1: (groupid=0, jobs=1): err= 0: pid=365185: Sun Nov 17 04:40:23 2024 00:23:45.713 write: IOPS=1418, BW=355MiB/s (372MB/s)(3562MiB/10042msec); 0 zone resets 00:23:45.713 slat (usec): min=21, max=6503, avg=697.42, stdev=1260.77 00:23:45.713 clat (usec): min=4923, max=98481, avg=44396.35, stdev=9341.47 00:23:45.713 lat (usec): min=4977, max=98537, avg=45093.78, stdev=9444.72 00:23:45.713 clat percentiles (usec): 00:23:45.713 | 1.00th=[33817], 5.00th=[34866], 10.00th=[35914], 20.00th=[36963], 00:23:45.713 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[44827], 00:23:45.713 | 70.00th=[53740], 80.00th=[55313], 90.00th=[56886], 95.00th=[57934], 00:23:45.713 | 99.00th=[60556], 99.50th=[65274], 99.90th=[88605], 99.95th=[91751], 00:23:45.713 | 99.99th=[98042] 00:23:45.713 bw ( KiB/s): min=278528, max=437248, per=10.46%, avg=363081.60, stdev=67528.59, samples=20 00:23:45.713 iops : min= 1088, max= 1708, avg=1418.25, stdev=263.83, samples=20 00:23:45.713 lat (msec) : 10=0.01%, 20=0.08%, 50=61.57%, 100=38.33% 00:23:45.713 cpu : usr=3.07%, sys=5.52%, ctx=3512, majf=0, minf=1 00:23:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.713 issued rwts: total=0,14247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.713 job2: (groupid=0, jobs=1): err= 0: pid=365186: Sun Nov 17 04:40:23 2024 00:23:45.713 write: IOPS=1509, BW=377MiB/s (396MB/s)(3780MiB/10017msec); 0 zone resets 00:23:45.713 slat (usec): min=16, max=49153, avg=608.61, stdev=1746.65 00:23:45.713 clat (usec): min=654, max=138367, avg=41771.57, stdev=22127.52 00:23:45.713 lat (usec): min=936, max=146124, avg=42380.19, stdev=22481.85 00:23:45.713 clat percentiles (msec): 00:23:45.713 | 1.00th=[ 10], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 19], 00:23:45.713 | 30.00th=[ 19], 40.00th=[ 35], 50.00th=[ 52], 60.00th=[ 54], 00:23:45.713 | 70.00th=[ 55], 80.00th=[ 56], 90.00th=[ 59], 95.00th=[ 88], 00:23:45.713 | 99.00th=[ 100], 99.50th=[ 104], 99.90th=[ 113], 99.95th=[ 132], 00:23:45.713 | 99.99th=[ 138] 00:23:45.713 bw ( KiB/s): min=173568, max=909824, per=11.11%, avg=385456.20, stdev=222085.72, samples=20 00:23:45.713 iops : min= 678, max= 3554, avg=1505.65, stdev=867.54, samples=20 00:23:45.713 lat (usec) : 750=0.01%, 1000=0.03% 00:23:45.713 lat 
(msec) : 2=0.17%, 4=0.23%, 10=0.64%, 20=34.51%, 50=12.92% 00:23:45.713 lat (msec) : 100=50.64%, 250=0.85% 00:23:45.713 cpu : usr=2.94%, sys=5.58%, ctx=3723, majf=0, minf=1 00:23:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.713 issued rwts: total=0,15121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.713 job3: (groupid=0, jobs=1): err= 0: pid=365187: Sun Nov 17 04:40:23 2024 00:23:45.713 write: IOPS=994, BW=249MiB/s (261MB/s)(2502MiB/10064msec); 0 zone resets 00:23:45.713 slat (usec): min=24, max=32380, avg=973.03, stdev=2071.62 00:23:45.713 clat (msec): min=7, max=151, avg=63.37, stdev=17.71 00:23:45.713 lat (msec): min=7, max=151, avg=64.34, stdev=18.02 00:23:45.713 clat percentiles (msec): 00:23:45.713 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 53], 00:23:45.713 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 59], 00:23:45.713 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 94], 00:23:45.713 | 99.00th=[ 105], 99.50th=[ 111], 99.90th=[ 142], 99.95th=[ 148], 00:23:45.713 | 99.99th=[ 153] 00:23:45.713 bw ( KiB/s): min=166400, max=425984, per=7.33%, avg=254540.80, stdev=66791.82, samples=20 00:23:45.713 iops : min= 650, max= 1664, avg=994.30, stdev=260.91, samples=20 00:23:45.713 lat (msec) : 10=0.04%, 20=0.20%, 50=11.18%, 100=86.74%, 250=1.84% 00:23:45.713 cpu : usr=2.31%, sys=3.92%, ctx=2433, majf=0, minf=1 00:23:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.713 issued rwts: total=0,10006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.713 job4: (groupid=0, jobs=1): err= 0: pid=365188: Sun Nov 17 04:40:23 2024 00:23:45.714 write: IOPS=970, BW=243MiB/s (254MB/s)(2441MiB/10063msec); 0 zone resets 00:23:45.714 slat (usec): min=26, max=47618, avg=984.18, stdev=2139.98 00:23:45.714 clat (msec): min=4, max=147, avg=64.94, stdev=16.48 00:23:45.714 lat (msec): min=4, max=147, avg=65.93, stdev=16.74 00:23:45.714 clat percentiles (msec): 00:23:45.714 | 1.00th=[ 27], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:23:45.714 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 59], 00:23:45.714 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 90], 95.00th=[ 94], 00:23:45.714 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 138], 99.95th=[ 146], 00:23:45.714 | 99.99th=[ 148] 00:23:45.714 bw ( KiB/s): min=170496, max=297472, per=7.16%, avg=248371.20, stdev=48718.06, samples=20 00:23:45.714 iops : min= 666, max= 1162, avg=970.20, stdev=190.30, samples=20 00:23:45.714 lat (msec) : 10=0.12%, 20=0.78%, 50=1.32%, 100=95.85%, 250=1.93% 00:23:45.714 cpu : usr=2.35%, sys=4.13%, ctx=2524, majf=0, minf=1 00:23:45.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:45.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.714 issued rwts: total=0,9765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.714 job5: (groupid=0, 
jobs=1): err= 0: pid=365189: Sun Nov 17 04:40:23 2024 00:23:45.714 write: IOPS=1418, BW=355MiB/s (372MB/s)(3563MiB/10043msec); 0 zone resets 00:23:45.714 slat (usec): min=24, max=7766, avg=697.28, stdev=1250.26 00:23:45.714 clat (usec): min=12349, max=96205, avg=44391.08, stdev=9270.43 00:23:45.714 lat (usec): min=12398, max=96263, avg=45088.36, stdev=9375.99 00:23:45.714 clat percentiles (usec): 00:23:45.714 | 1.00th=[33817], 5.00th=[34866], 10.00th=[35914], 20.00th=[36963], 00:23:45.714 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38536], 60.00th=[45351], 00:23:45.714 | 70.00th=[53216], 80.00th=[55313], 90.00th=[56886], 95.00th=[57934], 00:23:45.714 | 99.00th=[60031], 99.50th=[64750], 99.90th=[83362], 99.95th=[89654], 00:23:45.714 | 99.99th=[95945] 00:23:45.714 bw ( KiB/s): min=281088, max=439296, per=10.47%, avg=363187.20, stdev=67387.59, samples=20 00:23:45.714 iops : min= 1098, max= 1716, avg=1418.70, stdev=263.23, samples=20 00:23:45.714 lat (msec) : 20=0.06%, 50=61.56%, 100=38.39% 00:23:45.714 cpu : usr=3.33%, sys=5.29%, ctx=3531, majf=0, minf=1 00:23:45.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:45.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.714 issued rwts: total=0,14250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.714 job6: (groupid=0, jobs=1): err= 0: pid=365190: Sun Nov 17 04:40:23 2024 00:23:45.714 write: IOPS=1736, BW=434MiB/s (455MB/s)(4360MiB/10043msec); 0 zone resets 00:23:45.714 slat (usec): min=19, max=16862, avg=568.39, stdev=1184.10 00:23:45.714 clat (usec): min=4683, max=96074, avg=36270.65, stdev=16464.58 00:23:45.714 lat (usec): min=4737, max=96130, avg=36839.04, stdev=16718.28 00:23:45.714 clat percentiles (usec): 00:23:45.714 | 1.00th=[17171], 5.00th=[17695], 10.00th=[18220], 20.00th=[18744], 00:23:45.714 | 30.00th=[19268], 40.00th=[22676], 50.00th=[36439], 60.00th=[47449], 00:23:45.714 | 70.00th=[53216], 80.00th=[54789], 90.00th=[56361], 95.00th=[57410], 00:23:45.714 | 99.00th=[59507], 99.50th=[60556], 99.90th=[74974], 99.95th=[82314], 00:23:45.714 | 99.99th=[95945] 00:23:45.714 bw ( KiB/s): min=285696, max=863232, per=12.82%, avg=444876.80, stdev=202562.39, samples=20 00:23:45.714 iops : min= 1116, max= 3372, avg=1737.80, stdev=791.26, samples=20 00:23:45.714 lat (msec) : 10=0.03%, 20=36.29%, 50=24.51%, 100=39.17% 00:23:45.714 cpu : usr=3.36%, sys=5.18%, ctx=3841, majf=0, minf=1 00:23:45.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:45.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.714 issued rwts: total=0,17441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.714 job7: (groupid=0, jobs=1): err= 0: pid=365191: Sun Nov 17 04:40:23 2024 00:23:45.714 write: IOPS=1203, BW=301MiB/s (316MB/s)(3018MiB/10027msec); 0 zone resets 00:23:45.714 slat (usec): min=21, max=11429, avg=819.54, stdev=1502.93 00:23:45.714 clat (usec): min=15418, max=96567, avg=52320.56, stdev=14852.64 00:23:45.714 lat (usec): min=15997, max=97514, avg=53140.10, stdev=15047.33 00:23:45.714 clat percentiles (usec): 00:23:45.714 | 1.00th=[17171], 5.00th=[33162], 10.00th=[35914], 20.00th=[36963], 00:23:45.714 | 30.00th=[38536], 40.00th=[52691], 
50.00th=[54789], 60.00th=[56361], 00:23:45.714 | 70.00th=[57410], 80.00th=[61080], 90.00th=[74974], 95.00th=[76022], 00:23:45.714 | 99.00th=[80217], 99.50th=[85459], 99.90th=[91751], 99.95th=[94897], 00:23:45.714 | 99.99th=[96994] 00:23:45.714 bw ( KiB/s): min=211968, max=536576, per=8.86%, avg=307430.40, stdev=89032.61, samples=20 00:23:45.714 iops : min= 828, max= 2096, avg=1200.90, stdev=347.78, samples=20 00:23:45.714 lat (msec) : 20=1.72%, 50=34.32%, 100=63.96% 00:23:45.714 cpu : usr=2.89%, sys=4.93%, ctx=2978, majf=0, minf=1 00:23:45.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:45.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.714 issued rwts: total=0,12072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.714 job8: (groupid=0, jobs=1): err= 0: pid=365192: Sun Nov 17 04:40:23 2024 00:23:45.714 write: IOPS=1067, BW=267MiB/s (280MB/s)(2687MiB/10064msec); 0 zone resets 00:23:45.714 slat (usec): min=23, max=62220, avg=898.58, stdev=1795.20 00:23:45.714 clat (msec): min=7, max=151, avg=59.01, stdev=16.82 00:23:45.714 lat (msec): min=7, max=164, avg=59.91, stdev=17.05 00:23:45.714 clat percentiles (msec): 00:23:45.714 | 1.00th=[ 21], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 46], 00:23:45.714 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:23:45.714 | 70.00th=[ 64], 80.00th=[ 75], 90.00th=[ 79], 95.00th=[ 91], 00:23:45.714 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 142], 99.95th=[ 148], 00:23:45.714 | 99.99th=[ 153] 00:23:45.714 bw ( KiB/s): min=173568, max=434176, per=7.88%, avg=273536.00, stdev=66189.77, samples=20 00:23:45.714 iops : min= 678, max= 1696, avg=1068.50, stdev=258.55, samples=20 00:23:45.714 lat (msec) : 10=0.27%, 20=0.64%, 50=20.57%, 100=77.17%, 250=1.35% 00:23:45.714 cpu : usr=2.56%, sys=4.75%, ctx=2750, majf=0, minf=1 00:23:45.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:45.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.714 issued rwts: total=0,10748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.714 job9: (groupid=0, jobs=1): err= 0: pid=365193: Sun Nov 17 04:40:23 2024 00:23:45.714 write: IOPS=1245, BW=311MiB/s (327MB/s)(3123MiB/10027msec); 0 zone resets 00:23:45.714 slat (usec): min=24, max=16010, avg=784.02, stdev=1509.99 00:23:45.714 clat (msec): min=10, max=101, avg=50.56, stdev=11.57 00:23:45.714 lat (msec): min=10, max=103, avg=51.35, stdev=11.75 00:23:45.714 clat percentiles (msec): 00:23:45.714 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 38], 00:23:45.714 | 30.00th=[ 41], 40.00th=[ 53], 50.00th=[ 54], 60.00th=[ 55], 00:23:45.714 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 58], 95.00th=[ 67], 00:23:45.714 | 99.00th=[ 91], 99.50th=[ 93], 99.90th=[ 99], 99.95th=[ 100], 00:23:45.714 | 99.99th=[ 101] 00:23:45.714 bw ( KiB/s): min=195072, max=435712, per=9.17%, avg=318208.00, stdev=66944.75, samples=20 00:23:45.714 iops : min= 762, max= 1702, avg=1243.00, stdev=261.50, samples=20 00:23:45.714 lat (msec) : 20=0.10%, 50=34.44%, 100=65.44%, 250=0.02% 00:23:45.714 cpu : usr=2.96%, sys=4.91%, ctx=3104, majf=0, minf=1 00:23:45.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, 
>=64=99.5% 00:23:45.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.714 issued rwts: total=0,12493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.714 job10: (groupid=0, jobs=1): err= 0: pid=365194: Sun Nov 17 04:40:23 2024 00:23:45.714 write: IOPS=1070, BW=268MiB/s (281MB/s)(2694MiB/10064msec); 0 zone resets 00:23:45.714 slat (usec): min=22, max=30360, avg=915.92, stdev=1925.29 00:23:45.714 clat (msec): min=10, max=153, avg=58.84, stdev=18.04 00:23:45.714 lat (msec): min=10, max=153, avg=59.75, stdev=18.34 00:23:45.714 clat percentiles (msec): 00:23:45.714 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 43], 00:23:45.715 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:23:45.715 | 70.00th=[ 58], 80.00th=[ 75], 90.00th=[ 90], 95.00th=[ 94], 00:23:45.715 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 134], 99.95th=[ 144], 00:23:45.715 | 99.99th=[ 144] 00:23:45.715 bw ( KiB/s): min=165376, max=431616, per=7.90%, avg=274227.20, stdev=77500.92, samples=20 00:23:45.715 iops : min= 646, max= 1686, avg=1071.20, stdev=302.74, samples=20 00:23:45.715 lat (msec) : 20=0.11%, 50=22.39%, 100=75.45%, 250=2.05% 00:23:45.715 cpu : usr=2.55%, sys=4.33%, ctx=2657, majf=0, minf=1 00:23:45.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:45.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:45.715 issued rwts: total=0,10775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:45.715 00:23:45.715 Run status group 0 (all jobs): 00:23:45.715 WRITE: bw=3389MiB/s (3554MB/s), 237MiB/s-434MiB/s (249MB/s-455MB/s), io=33.3GiB (35.8GB), run=10017-10067msec 00:23:45.715 00:23:45.715 Disk stats (read/write): 00:23:45.715 nvme0n1: ios=49/18810, merge=0/0, ticks=19/1215169, in_queue=1215188, util=96.77% 00:23:45.715 nvme10n1: ios=0/28105, merge=0/0, ticks=0/1220013, in_queue=1220013, util=96.92% 00:23:45.715 nvme1n1: ios=0/29314, merge=0/0, ticks=0/1221945, in_queue=1221945, util=97.25% 00:23:45.715 nvme2n1: ios=0/19721, merge=0/0, ticks=0/1215850, in_queue=1215850, util=97.42% 00:23:45.715 nvme3n1: ios=0/19246, merge=0/0, ticks=0/1214887, in_queue=1214887, util=97.50% 00:23:45.715 nvme4n1: ios=0/28105, merge=0/0, ticks=0/1218535, in_queue=1218535, util=97.88% 00:23:45.715 nvme5n1: ios=0/34467, merge=0/0, ticks=0/1220476, in_queue=1220476, util=98.05% 00:23:45.715 nvme6n1: ios=0/23605, merge=0/0, ticks=0/1219268, in_queue=1219268, util=98.19% 00:23:45.715 nvme7n1: ios=0/21201, merge=0/0, ticks=0/1214648, in_queue=1214648, util=98.64% 00:23:45.715 nvme8n1: ios=0/24442, merge=0/0, ticks=0/1218869, in_queue=1218869, util=98.87% 00:23:45.715 nvme9n1: ios=0/21257, merge=0/0, ticks=0/1214096, in_queue=1214096, util=99.02% 00:23:45.715 04:40:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:23:45.715 04:40:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:23:45.715 04:40:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.715 04:40:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:23:46.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.283 04:40:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:47.221 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.221 04:40:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.221 04:40:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:48.158 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:48.158 04:40:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:48.158 04:40:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:48.158 04:40:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:48.158 04:40:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:23:48.418 04:40:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:48.418 04:40:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:23:48.418 04:40:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:48.418 04:40:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:48.418 04:40:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.418 04:40:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:48.418 04:40:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.418 04:40:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:48.418 04:40:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:49.355 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.355 04:40:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:50.293 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.293 04:40:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:51.228 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:51.228 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:51.228 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.487 04:40:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.487 04:40:30 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:52.423 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.423 04:40:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:53.360 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:23:53.360 04:40:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:54.738 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.738 04:40:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:55.676 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.676 04:40:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:56.614 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:56.614 rmmod nvme_rdma 00:23:56.614 rmmod nvme_fabrics 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 357203 ']' 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 357203 
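The per-subsystem teardown that finishes above follows a fixed pattern: disconnect the kernel initiator, wait for the block device with the matching SPDK serial to disappear, then delete the subsystem on the target side over RPC. A hedged reconstruction of that loop and of the polling helper, assembled from the traced commands (multiconnection.sh@37-40 and autotest_common.sh@1223-1235) rather than copied from the scripts; the retry bound is an assumption:

    # Poll until no block device reports the given SPDK serial any more.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1   # timeout bound assumed, not shown in the trace
            sleep 1
        done
        lsblk -l -o NAME,SERIAL        # traced: a final long-form listing before returning
        return 0
    }

    for i in $(seq 1 "$NVMF_SUBSYS"); do                    # NVMF_SUBSYS=11 in this run
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"  # drop the initiator session
        waitforserial_disconnect "SPDK${i}"                 # wait for the /dev entry to vanish
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # remove the target subsystem
    done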
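The killprocess 357203 call above then tears down the nvmf target application; its execution is traced step by step below (autotest_common.sh@954-@978). A sketch of what those traced steps amount to, reconstructed from the trace alone; the sudo special case at @964 is noted only as a comment since that branch is not taken here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1             # @954: no pid recorded, nothing to do
        kill -0 "$pid" || return 0            # @958: process already exited
        if [ "$(uname)" = Linux ]; then       # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 here
            # @964 special-cases process_name = sudo; not exercised in this run
        fi
        echo "killing process with pid $pid"  # @972
        kill "$pid"                           # @973
        wait "$pid"                           # @978: reap the target process
    }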
00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 357203 ']' 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 357203 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357203 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357203' 00:23:56.614 killing process with pid 357203 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 357203 00:23:56.614 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 357203 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:57.183 00:23:57.183 real 1m15.585s 00:23:57.183 user 4m53.683s 00:23:57.183 sys 0m19.971s 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.183 ************************************ 00:23:57.183 END TEST nvmf_multiconnection 00:23:57.183 ************************************ 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:57.183 ************************************ 00:23:57.183 START TEST nvmf_initiator_timeout 00:23:57.183 ************************************ 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:57.183 * Looking for test storage... 
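For reference, the write pass whose per-job statistics close the multiconnection test above was driven by scripts/fio-wrapper with -p nvmf -i 262144 -d 64 -t randwrite -r 10, which expands to the job file dumped earlier (one job per connected namespace, nvme0n1 through nvme9n1 plus nvme10n1). A roughly equivalent standalone fio command for a single one of those jobs, assembled from the traced parameters and intended as illustration only:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=262144 --iodepth=64 \
        --time_based --runtime=10 \
        --ioengine=libaio --direct=1 \
        --norandommap --numjobs=1 --thread --invalidate=1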
00:23:57.183 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:57.183 04:40:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:57.183 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:23:57.183 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:57.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.444 --rc genhtml_branch_coverage=1 00:23:57.444 --rc genhtml_function_coverage=1 00:23:57.444 --rc genhtml_legend=1 00:23:57.444 --rc geninfo_all_blocks=1 00:23:57.444 --rc geninfo_unexecuted_blocks=1 00:23:57.444 00:23:57.444 ' 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:57.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.444 --rc genhtml_branch_coverage=1 00:23:57.444 --rc genhtml_function_coverage=1 00:23:57.444 --rc genhtml_legend=1 00:23:57.444 --rc geninfo_all_blocks=1 00:23:57.444 --rc geninfo_unexecuted_blocks=1 00:23:57.444 00:23:57.444 ' 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:57.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.444 --rc genhtml_branch_coverage=1 00:23:57.444 --rc genhtml_function_coverage=1 00:23:57.444 --rc genhtml_legend=1 00:23:57.444 --rc geninfo_all_blocks=1 00:23:57.444 --rc geninfo_unexecuted_blocks=1 00:23:57.444 00:23:57.444 ' 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:57.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.444 --rc genhtml_branch_coverage=1 00:23:57.444 --rc genhtml_function_coverage=1 00:23:57.444 --rc genhtml_legend=1 00:23:57.444 --rc geninfo_all_blocks=1 00:23:57.444 --rc geninfo_unexecuted_blocks=1 00:23:57.444 00:23:57.444 ' 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.444 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.445 04:40:36 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.445 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.445 04:40:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.580 04:40:43 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.580 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:05.581 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:05.581 
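The discovery pass above walks a PCI bus cache keyed on vendor:device pairs (0x8086 Intel, 0x15b3 Mellanox), collects the e810, x722 and mlx arrays, and echoes each hit, e.g. "Found 0000:d9:00.0 (0x15b3 - 0x1015)". A minimal standalone sketch of the same matching in bash, substituting lspci -Dnn for the harness's pci_bus_cache and abridging the device-ID tables to the ones traced here:

    #!/usr/bin/env bash
    # Sketch: report RDMA-capable NICs by PCI vendor:device ID, mirroring
    # what gather_supported_nvmf_pci_devs does above via its pci_bus_cache.
    mlx_ids=(1013 1015 1017 1019 101b 101d 1021 a2d6 a2dc)  # Mellanox 0x15b3
    e810_ids=(1592 159b)                                    # Intel 0x8086
    while read -r addr rest; do
        for id in "${mlx_ids[@]}"; do
            [[ $rest == *"[15b3:$id]"* ]] && echo "Found $addr (0x15b3 - 0x$id)"
        done
        for id in "${e810_ids[@]}"; do
            [[ $rest == *"[8086:$id]"* ]] && echo "Found $addr (0x8086 - 0x$id)"
        done
    done < <(lspci -Dnn)

On this machine the loop would print the two ConnectX-4 Lx functions, 0000:d9:00.0 and 0000:d9:00.1, as the trace does.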
04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:05.581 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:05.581 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 
-- # (( 1 == 0 )) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:05.581 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:05.581 04:40:43 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:05.581 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:05.581 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:05.581 altname enp217s0f0np0 00:24:05.581 altname ens818f0np0 00:24:05.581 inet 192.168.100.8/24 scope global mlx_0_0 00:24:05.581 valid_lft forever preferred_lft forever 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:05.581 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:05.582 04:40:43 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:05.582 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:05.582 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:05.582 altname enp217s0f1np1 00:24:05.582 altname ens818f1np1 00:24:05.582 inet 192.168.100.9/24 scope global mlx_0_1 00:24:05.582 valid_lft forever preferred_lft forever 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 
-- # continue 2 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:05.582 192.168.100.9' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:05.582 192.168.100.9' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:05.582 192.168.100.9' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=371928 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 371928 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 371928 ']' 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.582 [2024-11-17 04:40:43.327102] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:05.582 [2024-11-17 04:40:43.327161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.582 [2024-11-17 04:40:43.418600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.582 [2024-11-17 04:40:43.440957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.582 [2024-11-17 04:40:43.440997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.582 [2024-11-17 04:40:43.441006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.582 [2024-11-17 04:40:43.441014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.582 [2024-11-17 04:40:43.441021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
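nvmfappstart above forks the target with shm id 0, tracepoint mask 0xFFFF and core mask 0xF, then waitforlisten blocks until the RPC socket answers. A hedged sketch of that launch-and-poll pattern; the binary and socket paths match the trace, but the retry budget and the spdk_get_version probe are illustrative stand-ins for the harness's own loop:

    # Start the target, then poll /var/tmp/spdk.sock before issuing RPCs.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in {1..100}; do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null && break
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done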
00:24:05.582 [2024-11-17 04:40:43.442727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.582 [2024-11-17 04:40:43.442839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.582 [2024-11-17 04:40:43.442927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.582 [2024-11-17 04:40:43.442928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.582 Malloc0 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:05.582 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.583 Delay0 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.583 [2024-11-17 04:40:43.663968] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe20c20/0xd13700) succeed. 00:24:05.583 [2024-11-17 04:40:43.673521] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe22260/0xd54da0) succeed. 
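The RPCs traced at initiator_timeout.sh @19-@29 (the bdev and transport calls above, the subsystem, namespace, listener and connect calls in the records just below) assemble the device under test: a 64 MiB, 512-byte-block malloc bdev wrapped in the Delay0 delay bdev and exported over RDMA on 192.168.100.8:4420. The same sequence, sketched as direct rpc.py calls under the default socket:

    RPC=./scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Initiator side; -i 15 is the NVME_CONNECT option selected earlier for these ports.
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420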
00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.583 [2024-11-17 04:40:43.814778] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.583 04:40:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:06.152 04:40:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:06.152 04:40:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:24:06.152 04:40:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:06.152 04:40:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:06.152 04:40:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:24:08.060 04:40:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:08.060 04:40:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:08.060 04:40:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:24:08.060 04:40:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:08.060 04:40:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:08.060 04:40:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:24:08.060 04:40:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=372518 00:24:08.060 04:40:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:08.060 04:40:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:08.060 [global] 00:24:08.060 thread=1 00:24:08.060 invalidate=1 00:24:08.060 rw=write 00:24:08.060 time_based=1 00:24:08.060 runtime=60 00:24:08.060 ioengine=libaio 00:24:08.060 direct=1 00:24:08.060 bs=4096 00:24:08.060 iodepth=1 00:24:08.060 norandommap=0 00:24:08.060 numjobs=1 00:24:08.060 00:24:08.060 verify_dump=1 00:24:08.060 verify_backlog=512 00:24:08.060 verify_state_save=0 00:24:08.060 do_verify=1 00:24:08.060 verify=crc32c-intel 00:24:08.060 [job0] 00:24:08.060 filename=/dev/nvme0n1 00:24:08.060 Could not set queue depth (nvme0n1) 00:24:08.626 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:08.626 fio-3.35 00:24:08.626 Starting 1 thread 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:11.171 true 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:11.171 true 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:11.171 true 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:11.171 true 00:24:11.171 04:40:49 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.171 04:40:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:14.464 true 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:14.464 true 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:14.464 true 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:14.464 true 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:14.464 04:40:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 372518 00:25:10.710 00:25:10.710 job0: (groupid=0, jobs=1): err= 0: pid=372777: Sun Nov 17 04:41:47 2024 00:25:10.710 read: IOPS=1256, BW=5026KiB/s (5147kB/s)(295MiB/60000msec) 00:25:10.710 slat (usec): min=5, max=11744, avg= 9.14, stdev=51.48 00:25:10.710 clat (usec): min=74, max=42546k, avg=668.05, stdev=154947.39 00:25:10.710 lat (usec): min=91, max=42546k, avg=677.19, stdev=154947.40 00:25:10.710 clat percentiles (usec): 00:25:10.710 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 99], 00:25:10.710 | 30.00th=[ 101], 40.00th=[ 102], 50.00th=[ 104], 60.00th=[ 105], 00:25:10.710 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 113], 95.00th=[ 115], 00:25:10.710 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 133], 00:25:10.710 | 99.99th=[ 255] 00:25:10.710 write: IOPS=1262, BW=5052KiB/s (5173kB/s)(296MiB/60000msec); 0 zone resets 00:25:10.710 slat (usec): 
min=6, max=994, avg=11.43, stdev= 4.26 00:25:10.710 clat (usec): min=50, max=507, avg=101.47, stdev= 6.58 00:25:10.710 lat (usec): min=90, max=1044, avg=112.90, stdev= 7.96 00:25:10.710 clat percentiles (usec): 00:25:10.710 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 96], 00:25:10.710 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 103], 00:25:10.710 | 70.00th=[ 104], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 113], 00:25:10.710 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 126], 99.95th=[ 131], 00:25:10.710 | 99.99th=[ 219] 00:25:10.710 bw ( KiB/s): min= 4096, max=19920, per=100.00%, avg=16852.11, stdev=2558.63, samples=35 00:25:10.710 iops : min= 1024, max= 4980, avg=4213.03, stdev=639.66, samples=35 00:25:10.710 lat (usec) : 100=34.93%, 250=65.06%, 500=0.01%, 750=0.01% 00:25:10.710 lat (msec) : >=2000=0.01% 00:25:10.710 cpu : usr=1.97%, sys=3.23%, ctx=151186, majf=0, minf=141 00:25:10.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:10.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.710 issued rwts: total=75397,75776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:10.710 00:25:10.710 Run status group 0 (all jobs): 00:25:10.710 READ: bw=5026KiB/s (5147kB/s), 5026KiB/s-5026KiB/s (5147kB/s-5147kB/s), io=295MiB (309MB), run=60000-60000msec 00:25:10.710 WRITE: bw=5052KiB/s (5173kB/s), 5052KiB/s-5052KiB/s (5173kB/s-5173kB/s), io=296MiB (310MB), run=60000-60000msec 00:25:10.710 00:25:10.710 Disk stats (read/write): 00:25:10.710 nvme0n1: ios=75364/75264, merge=0/0, ticks=7190/6987, in_queue=14177, util=99.55% 00:25:10.710 04:41:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:10.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:10.710 nvmf hotplug test: fio successful as expected 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:10.710 rmmod nvme_rdma 00:25:10.710 rmmod nvme_fabrics 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 371928 ']' 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 371928 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 371928 ']' 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 371928 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 371928 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 371928' 00:25:10.710 killing process with pid 371928 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 371928 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 371928 00:25:10.710 04:41:48 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:10.710 00:25:10.710 real 1m12.787s 00:25:10.710 user 4m31.861s 00:25:10.710 sys 0m8.150s 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.710 ************************************ 00:25:10.710 END TEST nvmf_initiator_timeout 00:25:10.710 ************************************ 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:10.710 04:41:48 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:10.711 ************************************ 00:25:10.711 START TEST nvmf_srq_overwhelm 00:25:10.711 ************************************ 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:10.711 * Looking for test storage... 
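Recapping before the srq_overwhelm trace takes over: the pass/fail core of initiator_timeout, traced at @40-@43 and @48-@51 above, is a latency toggle on the delay bdev while the 60 s fio job runs. A condensed sketch with the values copied from the trace (delay bdev latencies are given in microseconds):

    RPC=./scripts/rpc.py
    # Stall the device: ~31 s per op, beyond the default 30 s NVMe I/O timeout.
    $RPC bdev_delay_update_latency Delay0 avg_read 31000000
    $RPC bdev_delay_update_latency Delay0 avg_write 31000000
    $RPC bdev_delay_update_latency Delay0 p99_read 31000000
    $RPC bdev_delay_update_latency Delay0 p99_write 310000000  # value as traced
    sleep 3
    # Restore 30 us latencies so the fio job can complete and verify cleanly.
    for lat in avg_read avg_write p99_read p99_write; do
        $RPC bdev_delay_update_latency Delay0 "$lat" 30
    done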
00:25:10.711 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lcov --version 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:10.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.711 --rc genhtml_branch_coverage=1 00:25:10.711 --rc genhtml_function_coverage=1 00:25:10.711 --rc genhtml_legend=1 00:25:10.711 --rc geninfo_all_blocks=1 00:25:10.711 --rc geninfo_unexecuted_blocks=1 00:25:10.711 00:25:10.711 ' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:10.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.711 --rc genhtml_branch_coverage=1 00:25:10.711 --rc genhtml_function_coverage=1 00:25:10.711 --rc genhtml_legend=1 00:25:10.711 --rc geninfo_all_blocks=1 00:25:10.711 --rc geninfo_unexecuted_blocks=1 00:25:10.711 00:25:10.711 ' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:10.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.711 --rc genhtml_branch_coverage=1 00:25:10.711 --rc genhtml_function_coverage=1 00:25:10.711 --rc genhtml_legend=1 00:25:10.711 --rc geninfo_all_blocks=1 00:25:10.711 --rc geninfo_unexecuted_blocks=1 00:25:10.711 00:25:10.711 ' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:10.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.711 --rc genhtml_branch_coverage=1 00:25:10.711 --rc genhtml_function_coverage=1 00:25:10.711 --rc genhtml_legend=1 00:25:10.711 --rc geninfo_all_blocks=1 00:25:10.711 --rc geninfo_unexecuted_blocks=1 00:25:10.711 00:25:10.711 ' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
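The cmp_versions trace a few records back (scripts/common.sh @333-@368) splits both version strings on ".", "-" and ":" and compares them field by field; that is how the harness concludes lcov 1.15 < 2. A self-contained sketch of the same comparison, with the traced lt helper renamed version_lt here for clarity:

    # Field-wise version compare; returns 0 when $1 is strictly older than $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"  # matches the trace's result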
00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.711 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.712 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.712 04:41:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.712 04:41:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:10.712 04:41:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:10.712 04:41:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.712 04:41:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:17.290 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:17.290 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:17.290 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.290 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:17.290 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
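In the device scan above, the harness resolves each candidate PCI function to its Linux netdev by globbing sysfs, which is what produces the "Found net devices under 0000:d9:00.x: mlx_0_x" lines. A minimal sketch of that lookup for the first port, reconstructed from the common.sh@411/@427/@428 trace entries (the PCI address is the one reported in the log):

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. /sys/bus/pci/devices/0000:d9:00.0/net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the directory part, keep the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"

Note also that common.sh@388 rewrites NVME_CONNECT to 'nvme connect -i 15' for these mlx5 0x1015 functions, which is why every connect later in the test requests 15 I/O queues.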
00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:17.291 04:41:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:17.291 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:17.291 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:17.291 altname enp217s0f0np0 00:25:17.291 altname ens818f0np0 00:25:17.291 inet 192.168.100.8/24 scope global mlx_0_0 00:25:17.291 valid_lft forever preferred_lft forever 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:17.291 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:17.291 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:17.291 altname enp217s0f1np1 00:25:17.291 altname ens818f1np1 00:25:17.291 inet 192.168.100.9/24 scope global mlx_0_1 00:25:17.291 valid_lft forever preferred_lft forever 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:17.291 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:17.551 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:17.551 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:17.551 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:17.551 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:17.551 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:17.552 192.168.100.9' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:17.552 192.168.100.9' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:17.552 192.168.100.9' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=386232 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 386232 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 386232 ']' 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
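The get_ip_address calls traced above reduce `ip -o -4 addr show` output to a bare IPv4 address; the two results are then joined into RDMA_IP_LIST and split back out into the first and second target IPs. A sketch reconstructed from the trace (function name, pipeline and variables exactly as they appear there):

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"                                              # two addresses, newline-separated
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9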
00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.552 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:17.552 [2024-11-17 04:41:56.286191] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:25:17.552 [2024-11-17 04:41:56.286249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.552 [2024-11-17 04:41:56.380020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.812 [2024-11-17 04:41:56.402612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.812 [2024-11-17 04:41:56.402649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.812 [2024-11-17 04:41:56.402658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.812 [2024-11-17 04:41:56.402667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.812 [2024-11-17 04:41:56.402673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.812 [2024-11-17 04:41:56.404420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.812 [2024-11-17 04:41:56.404530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.812 [2024-11-17 04:41:56.404642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.812 [2024-11-17 04:41:56.404643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:17.812 [2024-11-17 04:41:56.566892] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x248dc50/0x2492100) succeed. 00:25:17.812 [2024-11-17 04:41:56.575960] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x248f290/0x24d37a0) succeed. 
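With nvmf_tgt up (pid 386232) and both mlx5 IB devices registered, srq_overwhelm.sh@20 creates the RDMA transport with a deliberately small shared receive queue; the 78 fio threads at iodepth 128 launched later are meant to overrun it, which is the point of the test. A roughly equivalent standalone invocation through SPDK's RPC client (rpc.py talks to the same default /var/tmp/spdk.sock the trace waits on; the readings of the short flags in the comment are my interpretation, not taken from the log):

# -u: I/O unit size in bytes; -s: max shared-receive-queue depth (kept small here on purpose)
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024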
00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.812 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 Malloc0 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 [2024-11-17 04:41:56.686700] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.072 04:41:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# lsblk -l -o NAME 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:19.011 Malloc1 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.011 04:41:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:19.949 Malloc2 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.949 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:20.209 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.209 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:25:20.209 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.209 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:20.209 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.209 04:41:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:25:21.146 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:25:21.146 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:25:21.146 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:21.146 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:25:21.146 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:25:21.146 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:21.146 04:41:59 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:25:21.146 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:21.146 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:21.147 Malloc3 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.147 04:41:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:25:22.092 
04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:22.092 Malloc4 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.092 04:42:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
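Each pass of the `for i in $(seq 0 5)` loop traced above follows the same five-step pattern: create a subsystem, back it with a 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from srq_overwhelm.sh@11-12), expose it on the RDMA listener, connect from the host, and wait for the block device. One iteration, reconstructed from the rpc_cmd and nvme connect calls in the log:

i=5
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
rpc_cmd bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB backing bdev, 512-byte blocks
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
    -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
waitforblk nvme${i}n1    # polls lsblk -l -o NAME until the new namespace shows up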
00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:23.473 Malloc5 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.473 04:42:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:25:24.412 04:42:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:25:24.412 04:42:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:25:24.412 04:42:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:24.412 04:42:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:25:24.412 04:42:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:25:24.412 04:42:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:24.412 04:42:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:25:24.412 04:42:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:25:24.412 
[global]
00:25:24.412 thread=1
00:25:24.412 invalidate=1
00:25:24.412 rw=read
00:25:24.412 time_based=1
00:25:24.412 runtime=10
00:25:24.412 ioengine=libaio
00:25:24.412 direct=1
00:25:24.412 bs=1048576
00:25:24.412 iodepth=128
00:25:24.412 norandommap=1
00:25:24.412 numjobs=13
00:25:24.412
00:25:24.412 [job0]
00:25:24.412 filename=/dev/nvme0n1
00:25:24.412 [job1]
00:25:24.412 filename=/dev/nvme1n1
00:25:24.412 [job2]
00:25:24.412 filename=/dev/nvme2n1
00:25:24.412 [job3]
00:25:24.412 filename=/dev/nvme3n1
00:25:24.412 [job4]
00:25:24.412 filename=/dev/nvme4n1
00:25:24.412 [job5]
00:25:24.412 filename=/dev/nvme5n1
00:25:24.412 Could not set queue depth (nvme0n1)
00:25:24.412 Could not set queue depth (nvme1n1)
00:25:24.412 Could not set queue depth (nvme2n1)
00:25:24.412 Could not set queue depth (nvme3n1)
00:25:24.412 Could not set queue depth (nvme4n1)
00:25:24.412 Could not set queue depth (nvme5n1)
00:25:24.671 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:25:24.671 ...
00:25:24.671 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:25:24.671 ...
00:25:24.671 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:25:24.671 ...
00:25:24.671 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:25:24.671 ...
00:25:24.671 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:25:24.671 ...
00:25:24.671 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:25:24.671 ...
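The job file above is what scripts/fio-wrapper generated from '-p nvmf -i 1048576 -d 128 -t read -r 10 -n 13': 1 MiB sequential O_DIRECT reads at queue depth 128, 13 threads per namespace across the six connected devices, hence fio's "Starting 78 threads" below. A roughly equivalent direct fio command line for a single device (standard fio flag spellings; this is my paraphrase of the job file, not a command taken from the log):

fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=1048576 \
    --iodepth=128 --numjobs=13 --thread --ioengine=libaio --direct=1 \
    --invalidate=1 --norandommap --time_based --runtime=10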
00:25:24.671 fio-3.35 00:25:24.671 Starting 78 threads 00:25:37.147 00:25:37.147 job0: (groupid=0, jobs=1): err= 0: pid=387697: Sun Nov 17 04:42:13 2024 00:25:37.147 read: IOPS=60, BW=60.0MiB/s (62.9MB/s)(608MiB/10130msec) 00:25:37.147 slat (usec): min=52, max=2094.6k, avg=16473.78, stdev=114723.66 00:25:37.147 clat (msec): min=109, max=5107, avg=1408.35, stdev=1091.25 00:25:37.147 lat (msec): min=162, max=6915, avg=1424.82, stdev=1111.42 00:25:37.147 clat percentiles (msec): 00:25:37.147 | 1.00th=[ 288], 5.00th=[ 634], 10.00th=[ 684], 20.00th=[ 768], 00:25:37.147 | 30.00th=[ 810], 40.00th=[ 869], 50.00th=[ 944], 60.00th=[ 995], 00:25:37.147 | 70.00th=[ 1351], 80.00th=[ 2198], 90.00th=[ 2635], 95.00th=[ 5000], 00:25:37.147 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5134], 99.95th=[ 5134], 00:25:37.147 | 99.99th=[ 5134] 00:25:37.147 bw ( KiB/s): min= 6144, max=178176, per=2.19%, avg=89321.91, stdev=68089.54, samples=11 00:25:37.147 iops : min= 6, max= 174, avg=87.09, stdev=66.42, samples=11 00:25:37.147 lat (msec) : 250=0.66%, 500=1.15%, 750=15.13%, 1000=44.24%, 2000=17.60% 00:25:37.147 lat (msec) : >=2000=21.22% 00:25:37.147 cpu : usr=0.04%, sys=1.43%, ctx=913, majf=0, minf=32769 00:25:37.147 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:25:37.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.147 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.147 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.147 job0: (groupid=0, jobs=1): err= 0: pid=387698: Sun Nov 17 04:42:13 2024 00:25:37.147 read: IOPS=200, BW=200MiB/s (210MB/s)(2024MiB/10115msec) 00:25:37.148 slat (usec): min=41, max=2036.8k, avg=4938.84, stdev=59182.89 00:25:37.148 clat (msec): min=112, max=5808, avg=610.10, stdev=1245.21 00:25:37.148 lat (msec): min=123, max=5809, avg=615.04, stdev=1250.80 00:25:37.148 clat percentiles (msec): 00:25:37.148 | 1.00th=[ 124], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 125], 00:25:37.148 | 30.00th=[ 126], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 159], 00:25:37.148 | 70.00th=[ 262], 80.00th=[ 718], 90.00th=[ 1062], 95.00th=[ 4597], 00:25:37.148 | 99.00th=[ 5738], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:25:37.148 | 99.99th=[ 5805] 00:25:37.148 bw ( KiB/s): min= 6144, max=1046528, per=6.82%, avg=277500.43, stdev=379263.23, samples=14 00:25:37.148 iops : min= 6, max= 1022, avg=270.93, stdev=370.42, samples=14 00:25:37.148 lat (msec) : 250=68.18%, 500=9.58%, 750=2.52%, 1000=5.88%, 2000=7.21% 00:25:37.148 lat (msec) : >=2000=6.62% 00:25:37.148 cpu : usr=0.05%, sys=2.33%, ctx=2392, majf=0, minf=32769 00:25:37.148 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:25:37.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.148 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.148 issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.148 job0: (groupid=0, jobs=1): err= 0: pid=387699: Sun Nov 17 04:42:13 2024 00:25:37.148 read: IOPS=12, BW=13.0MiB/s (13.6MB/s)(131MiB/10090msec) 00:25:37.148 slat (usec): min=388, max=2064.9k, avg=76391.07, stdev=327224.82 00:25:37.148 clat (msec): min=81, max=10082, avg=3767.78, stdev=3301.59 00:25:37.148 lat (msec): min=91, max=10084, avg=3844.17, stdev=3331.28 00:25:37.148 clat percentiles (msec): 
00:25:37.148 | 1.00th=[ 92], 5.00th=[ 230], 10.00th=[ 393], 20.00th=[ 785], 00:25:37.148 | 30.00th=[ 1167], 40.00th=[ 1318], 50.00th=[ 1770], 60.00th=[ 3910], 00:25:37.148 | 70.00th=[ 5940], 80.00th=[ 6074], 90.00th=[10000], 95.00th=[10134], 00:25:37.148 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:25:37.148 | 99.99th=[10134] 00:25:37.148 bw ( KiB/s): min= 8126, max= 8126, per=0.20%, avg=8126.00, stdev= 0.00, samples=1 00:25:37.148 iops : min= 7, max= 7, avg= 7.00, stdev= 0.00, samples=1 00:25:37.148 lat (msec) : 100=1.53%, 250=3.82%, 500=7.63%, 750=6.11%, 1000=5.34% 00:25:37.148 lat (msec) : 2000=25.95%, >=2000=49.62% 00:25:37.148 cpu : usr=0.00%, sys=0.71%, ctx=377, majf=0, minf=32769 00:25:37.148 IO depths : 1=0.8%, 2=1.5%, 4=3.1%, 8=6.1%, 16=12.2%, 32=24.4%, >=64=51.9% 00:25:37.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.148 complete : 0=0.0%, 4=80.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=20.0% 00:25:37.148 issued rwts: total=131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.148 job0: (groupid=0, jobs=1): err= 0: pid=387700: Sun Nov 17 04:42:13 2024 00:25:37.148 read: IOPS=44, BW=44.3MiB/s (46.4MB/s)(446MiB/10075msec) 00:25:37.148 slat (usec): min=38, max=2031.0k, avg=22436.54, stdev=146632.12 00:25:37.148 clat (msec): min=65, max=9618, avg=2272.87, stdev=2184.44 00:25:37.148 lat (msec): min=82, max=9634, avg=2295.31, stdev=2205.52 00:25:37.148 clat percentiles (msec): 00:25:37.148 | 1.00th=[ 87], 5.00th=[ 180], 10.00th=[ 388], 20.00th=[ 393], 00:25:37.148 | 30.00th=[ 397], 40.00th=[ 409], 50.00th=[ 1284], 60.00th=[ 2635], 00:25:37.148 | 70.00th=[ 3373], 80.00th=[ 4530], 90.00th=[ 5738], 95.00th=[ 6342], 00:25:37.148 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 9597], 99.95th=[ 9597], 00:25:37.148 | 99.99th=[ 9597] 00:25:37.148 bw ( KiB/s): min= 2048, max=274432, per=1.46%, avg=59392.00, stdev=74924.34, samples=11 00:25:37.148 iops : min= 2, max= 268, avg=58.00, stdev=73.17, samples=11 00:25:37.148 lat (msec) : 100=2.24%, 250=3.81%, 500=38.12%, 750=0.90%, 1000=3.14% 00:25:37.148 lat (msec) : 2000=7.62%, >=2000=44.17% 00:25:37.148 cpu : usr=0.01%, sys=1.27%, ctx=746, majf=0, minf=32769 00:25:37.148 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9% 00:25:37.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.148 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:37.148 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.148 job0: (groupid=0, jobs=1): err= 0: pid=387701: Sun Nov 17 04:42:13 2024 00:25:37.148 read: IOPS=25, BW=25.4MiB/s (26.6MB/s)(256MiB/10093msec) 00:25:37.148 slat (usec): min=39, max=2030.4k, avg=39092.25, stdev=210976.77 00:25:37.148 clat (msec): min=83, max=5051, avg=2983.72, stdev=1736.13 00:25:37.148 lat (msec): min=114, max=5105, avg=3022.82, stdev=1733.29 00:25:37.148 clat percentiles (msec): 00:25:37.148 | 1.00th=[ 117], 5.00th=[ 203], 10.00th=[ 558], 20.00th=[ 1150], 00:25:37.148 | 30.00th=[ 1351], 40.00th=[ 1787], 50.00th=[ 3809], 60.00th=[ 4178], 00:25:37.148 | 70.00th=[ 4530], 80.00th=[ 4665], 90.00th=[ 4866], 95.00th=[ 4933], 00:25:37.148 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:25:37.148 | 99.99th=[ 5067] 00:25:37.148 bw ( KiB/s): min=12288, max=71536, per=0.92%, avg=37653.71, stdev=21127.78, samples=7 00:25:37.148 
iops : min= 12, max= 69, avg=36.57, stdev=20.31, samples=7 00:25:37.148 lat (msec) : 100=0.39%, 250=4.69%, 500=2.34%, 750=7.81%, 1000=3.52% 00:25:37.148 lat (msec) : 2000=22.66%, >=2000=58.59% 00:25:37.148 cpu : usr=0.00%, sys=1.10%, ctx=606, majf=0, minf=32769 00:25:37.148 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.5%, >=64=75.4% 00:25:37.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.148 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:25:37.148 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.148 job0: (groupid=0, jobs=1): err= 0: pid=387702: Sun Nov 17 04:42:13 2024 00:25:37.148 read: IOPS=55, BW=55.0MiB/s (57.7MB/s)(554MiB/10072msec) 00:25:37.148 slat (usec): min=37, max=1812.2k, avg=18095.24, stdev=109868.93 00:25:37.148 clat (msec): min=44, max=4878, avg=1692.79, stdev=1077.80 00:25:37.148 lat (msec): min=84, max=4966, avg=1710.88, stdev=1085.77 00:25:37.148 clat percentiles (msec): 00:25:37.148 | 1.00th=[ 169], 5.00th=[ 760], 10.00th=[ 877], 20.00th=[ 885], 00:25:37.148 | 30.00th=[ 894], 40.00th=[ 969], 50.00th=[ 1234], 60.00th=[ 1536], 00:25:37.148 | 70.00th=[ 1754], 80.00th=[ 2937], 90.00th=[ 3675], 95.00th=[ 3809], 00:25:37.148 | 99.00th=[ 3977], 99.50th=[ 4010], 99.90th=[ 4866], 99.95th=[ 4866], 00:25:37.148 | 99.99th=[ 4866] 00:25:37.148 bw ( KiB/s): min=16384, max=157696, per=2.14%, avg=87199.60, stdev=53887.74, samples=10 00:25:37.148 iops : min= 16, max= 154, avg=85.10, stdev=52.69, samples=10 00:25:37.148 lat (msec) : 50=0.18%, 100=0.36%, 250=0.72%, 500=1.26%, 750=2.17% 00:25:37.148 lat (msec) : 1000=38.09%, 2000=33.94%, >=2000=23.29% 00:25:37.148 cpu : usr=0.04%, sys=1.17%, ctx=850, majf=0, minf=32769 00:25:37.148 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.6% 00:25:37.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.148 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.148 issued rwts: total=554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.148 job0: (groupid=0, jobs=1): err= 0: pid=387703: Sun Nov 17 04:42:13 2024 00:25:37.148 read: IOPS=22, BW=22.4MiB/s (23.5MB/s)(225MiB/10045msec) 00:25:37.148 slat (usec): min=37, max=2104.0k, avg=44448.31, stdev=263934.41 00:25:37.148 clat (msec): min=42, max=9059, avg=1790.90, stdev=2166.42 00:25:37.148 lat (msec): min=51, max=9093, avg=1835.34, stdev=2218.23 00:25:37.148 clat percentiles (msec): 00:25:37.148 | 1.00th=[ 53], 5.00th=[ 68], 10.00th=[ 368], 20.00th=[ 600], 00:25:37.148 | 30.00th=[ 852], 40.00th=[ 919], 50.00th=[ 1099], 60.00th=[ 1150], 00:25:37.148 | 70.00th=[ 1234], 80.00th=[ 1385], 90.00th=[ 5134], 95.00th=[ 7215], 00:25:37.148 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:25:37.148 | 99.99th=[ 9060] 00:25:37.148 bw ( KiB/s): min=86016, max=114688, per=2.47%, avg=100352.00, stdev=20274.17, samples=2 00:25:37.148 iops : min= 84, max= 112, avg=98.00, stdev=19.80, samples=2 00:25:37.148 lat (msec) : 50=0.44%, 100=5.78%, 250=1.78%, 500=10.22%, 750=8.89% 00:25:37.148 lat (msec) : 1000=17.78%, 2000=36.00%, >=2000=19.11% 00:25:37.148 cpu : usr=0.00%, sys=0.93%, ctx=388, majf=0, minf=32769 00:25:37.148 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.2%, >=64=72.0% 00:25:37.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:37.148 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:25:37.148 issued rwts: total=225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.148 job0: (groupid=0, jobs=1): err= 0: pid=387704: Sun Nov 17 04:42:13 2024 00:25:37.148 read: IOPS=23, BW=23.7MiB/s (24.9MB/s)(240MiB/10122msec) 00:25:37.148 slat (usec): min=35, max=2133.1k, avg=41693.35, stdev=224733.73 00:25:37.148 clat (msec): min=113, max=8030, avg=2629.02, stdev=2347.42 00:25:37.148 lat (msec): min=172, max=8036, avg=2670.72, stdev=2364.89 00:25:37.148 clat percentiles (msec): 00:25:37.148 | 1.00th=[ 236], 5.00th=[ 760], 10.00th=[ 1011], 20.00th=[ 1250], 00:25:37.148 | 30.00th=[ 1519], 40.00th=[ 1737], 50.00th=[ 1838], 60.00th=[ 1938], 00:25:37.148 | 70.00th=[ 2056], 80.00th=[ 2165], 90.00th=[ 7953], 95.00th=[ 8020], 00:25:37.148 | 99.00th=[ 8020], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020], 00:25:37.148 | 99.99th=[ 8020] 00:25:37.148 bw ( KiB/s): min=14336, max=161792, per=1.14%, avg=46284.80, stdev=64611.07, samples=5 00:25:37.148 iops : min= 14, max= 158, avg=45.20, stdev=63.10, samples=5 00:25:37.148 lat (msec) : 250=1.25%, 500=2.08%, 750=1.25%, 1000=4.58%, 2000=55.83% 00:25:37.148 lat (msec) : >=2000=35.00% 00:25:37.148 cpu : usr=0.01%, sys=1.06%, ctx=660, majf=0, minf=32769 00:25:37.148 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.7%, 32=13.3%, >=64=73.8% 00:25:37.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.148 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:25:37.148 issued rwts: total=240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.148 job0: (groupid=0, jobs=1): err= 0: pid=387705: Sun Nov 17 04:42:13 2024 00:25:37.148 read: IOPS=26, BW=26.6MiB/s (27.9MB/s)(269MiB/10108msec) 00:25:37.148 slat (usec): min=394, max=2035.6k, avg=37293.51, stdev=194312.87 00:25:37.148 clat (msec): min=74, max=6885, avg=4028.73, stdev=2195.22 00:25:37.148 lat (msec): min=146, max=6889, avg=4066.02, stdev=2182.73 00:25:37.148 clat percentiles (msec): 00:25:37.148 | 1.00th=[ 167], 5.00th=[ 667], 10.00th=[ 1217], 20.00th=[ 1754], 00:25:37.148 | 30.00th=[ 1838], 40.00th=[ 3171], 50.00th=[ 3373], 60.00th=[ 5805], 00:25:37.148 | 70.00th=[ 6007], 80.00th=[ 6342], 90.00th=[ 6544], 95.00th=[ 6678], 00:25:37.148 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6879], 99.95th=[ 6879], 00:25:37.148 | 99.99th=[ 6879] 00:25:37.148 bw ( KiB/s): min= 2048, max=81920, per=0.79%, avg=32085.33, stdev=29678.34, samples=9 00:25:37.148 iops : min= 2, max= 80, avg=31.33, stdev=28.98, samples=9 00:25:37.149 lat (msec) : 100=0.37%, 250=1.49%, 500=1.86%, 750=1.86%, 1000=2.23% 00:25:37.149 lat (msec) : 2000=25.65%, >=2000=66.54% 00:25:37.149 cpu : usr=0.02%, sys=1.18%, ctx=622, majf=0, minf=32769 00:25:37.149 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.9%, >=64=76.6% 00:25:37.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.149 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:25:37.149 issued rwts: total=269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.149 job0: (groupid=0, jobs=1): err= 0: pid=387706: Sun Nov 17 04:42:13 2024 00:25:37.149 read: IOPS=26, BW=26.6MiB/s (27.9MB/s)(269MiB/10109msec) 00:25:37.149 slat (usec): min=71, max=2119.6k, avg=37177.76, stdev=242185.32 
00:25:37.149 clat (msec): min=105, max=8982, avg=1872.67, stdev=2503.30 00:25:37.149 lat (msec): min=114, max=8993, avg=1909.85, stdev=2539.10 00:25:37.149 clat percentiles (msec): 00:25:37.149 | 1.00th=[ 125], 5.00th=[ 203], 10.00th=[ 309], 20.00th=[ 514], 00:25:37.149 | 30.00th=[ 735], 40.00th=[ 927], 50.00th=[ 986], 60.00th=[ 995], 00:25:37.149 | 70.00th=[ 1003], 80.00th=[ 1083], 90.00th=[ 7148], 95.00th=[ 8926], 00:25:37.149 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:25:37.149 | 99.99th=[ 8926] 00:25:37.149 bw ( KiB/s): min=26624, max=135168, per=2.38%, avg=96938.67, stdev=60971.73, samples=3 00:25:37.149 iops : min= 26, max= 132, avg=94.67, stdev=59.54, samples=3 00:25:37.149 lat (msec) : 250=7.06%, 500=12.27%, 750=11.90%, 1000=37.92%, 2000=11.52% 00:25:37.149 lat (msec) : >=2000=19.33% 00:25:37.149 cpu : usr=0.06%, sys=1.39%, ctx=224, majf=0, minf=32769 00:25:37.149 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.9%, >=64=76.6% 00:25:37.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.149 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:25:37.149 issued rwts: total=269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.149 job0: (groupid=0, jobs=1): err= 0: pid=387707: Sun Nov 17 04:42:13 2024 00:25:37.149 read: IOPS=31, BW=31.5MiB/s (33.1MB/s)(318MiB/10086msec) 00:25:37.149 slat (usec): min=492, max=2130.8k, avg=31570.85, stdev=188639.04 00:25:37.149 clat (msec): min=44, max=8085, avg=3775.22, stdev=3138.56 00:25:37.149 lat (msec): min=92, max=8087, avg=3806.80, stdev=3139.72 00:25:37.149 clat percentiles (msec): 00:25:37.149 | 1.00th=[ 113], 5.00th=[ 542], 10.00th=[ 1045], 20.00th=[ 1183], 00:25:37.149 | 30.00th=[ 1234], 40.00th=[ 1318], 50.00th=[ 1418], 60.00th=[ 3876], 00:25:37.149 | 70.00th=[ 7215], 80.00th=[ 7886], 90.00th=[ 7953], 95.00th=[ 8020], 00:25:37.149 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087], 00:25:37.149 | 99.99th=[ 8087] 00:25:37.149 bw ( KiB/s): min= 6144, max=131072, per=1.06%, avg=43188.33, stdev=40764.10, samples=9 00:25:37.149 iops : min= 6, max= 128, avg=42.11, stdev=39.83, samples=9 00:25:37.149 lat (msec) : 50=0.31%, 100=0.63%, 250=1.57%, 500=2.20%, 750=1.26% 00:25:37.149 lat (msec) : 1000=1.89%, 2000=49.37%, >=2000=42.77% 00:25:37.149 cpu : usr=0.03%, sys=1.01%, ctx=823, majf=0, minf=32769 00:25:37.149 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.0%, 32=10.1%, >=64=80.2% 00:25:37.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.149 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:37.149 issued rwts: total=318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.149 job0: (groupid=0, jobs=1): err= 0: pid=387708: Sun Nov 17 04:42:13 2024 00:25:37.149 read: IOPS=15, BW=15.5MiB/s (16.3MB/s)(158MiB/10162msec) 00:25:37.149 slat (usec): min=997, max=2126.7k, avg=63619.45, stdev=275755.39 00:25:37.149 clat (msec): min=109, max=9898, avg=4101.81, stdev=3050.62 00:25:37.149 lat (msec): min=185, max=9907, avg=4165.43, stdev=3071.36 00:25:37.149 clat percentiles (msec): 00:25:37.149 | 1.00th=[ 186], 5.00th=[ 321], 10.00th=[ 609], 20.00th=[ 1703], 00:25:37.149 | 30.00th=[ 2702], 40.00th=[ 2903], 50.00th=[ 3138], 60.00th=[ 3406], 00:25:37.149 | 70.00th=[ 3675], 80.00th=[ 8658], 90.00th=[ 9597], 95.00th=[ 9731], 00:25:37.149 | 99.00th=[ 9866], 
99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:25:37.149 | 99.99th=[ 9866] 00:25:37.149 bw ( KiB/s): min=12288, max=32768, per=0.50%, avg=20480.00, stdev=10837.00, samples=3 00:25:37.149 iops : min= 12, max= 32, avg=20.00, stdev=10.58, samples=3 00:25:37.149 lat (msec) : 250=1.90%, 500=6.33%, 750=3.16%, 1000=1.90%, 2000=7.59% 00:25:37.149 lat (msec) : >=2000=79.11% 00:25:37.149 cpu : usr=0.02%, sys=0.91%, ctx=552, majf=0, minf=32769 00:25:37.149 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.1%, 16=10.1%, 32=20.3%, >=64=60.1% 00:25:37.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.149 complete : 0=0.0%, 4=96.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.1% 00:25:37.149 issued rwts: total=158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.149 job0: (groupid=0, jobs=1): err= 0: pid=387709: Sun Nov 17 04:42:13 2024 00:25:37.149 read: IOPS=58, BW=58.6MiB/s (61.4MB/s)(593MiB/10127msec) 00:25:37.149 slat (usec): min=42, max=2051.7k, avg=16884.73, stdev=145020.08 00:25:37.149 clat (msec): min=111, max=8098, avg=2068.33, stdev=2784.37 00:25:37.149 lat (msec): min=134, max=8100, avg=2085.21, stdev=2794.17 00:25:37.149 clat percentiles (msec): 00:25:37.149 | 1.00th=[ 203], 5.00th=[ 380], 10.00th=[ 388], 20.00th=[ 393], 00:25:37.149 | 30.00th=[ 414], 40.00th=[ 468], 50.00th=[ 567], 60.00th=[ 651], 00:25:37.149 | 70.00th=[ 1099], 80.00th=[ 5873], 90.00th=[ 7349], 95.00th=[ 7819], 00:25:37.149 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087], 00:25:37.149 | 99.99th=[ 8087] 00:25:37.149 bw ( KiB/s): min= 4096, max=331776, per=2.34%, avg=95220.90, stdev=114286.26, samples=10 00:25:37.149 iops : min= 4, max= 324, avg=92.90, stdev=111.64, samples=10 00:25:37.149 lat (msec) : 250=1.85%, 500=42.33%, 750=19.90%, 1000=3.54%, 2000=8.26% 00:25:37.149 lat (msec) : >=2000=24.11% 00:25:37.149 cpu : usr=0.00%, sys=1.61%, ctx=738, majf=0, minf=32769 00:25:37.149 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:25:37.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.149 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.149 issued rwts: total=593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.149 job1: (groupid=0, jobs=1): err= 0: pid=387710: Sun Nov 17 04:42:13 2024 00:25:37.149 read: IOPS=37, BW=37.2MiB/s (39.0MB/s)(375MiB/10081msec) 00:25:37.149 slat (usec): min=46, max=1712.7k, avg=26727.73, stdev=132847.38 00:25:37.149 clat (msec): min=56, max=6633, avg=2846.54, stdev=1548.91 00:25:37.149 lat (msec): min=115, max=6657, avg=2873.26, stdev=1551.22 00:25:37.149 clat percentiles (msec): 00:25:37.149 | 1.00th=[ 146], 5.00th=[ 506], 10.00th=[ 693], 20.00th=[ 1418], 00:25:37.149 | 30.00th=[ 2089], 40.00th=[ 2400], 50.00th=[ 2500], 60.00th=[ 2937], 00:25:37.149 | 70.00th=[ 4245], 80.00th=[ 4329], 90.00th=[ 5201], 95.00th=[ 5470], 00:25:37.149 | 99.00th=[ 5671], 99.50th=[ 5738], 99.90th=[ 6611], 99.95th=[ 6611], 00:25:37.149 | 99.99th=[ 6611] 00:25:37.149 bw ( KiB/s): min=20480, max=65536, per=1.04%, avg=42154.67, stdev=14147.47, samples=12 00:25:37.149 iops : min= 20, max= 64, avg=41.17, stdev=13.82, samples=12 00:25:37.149 lat (msec) : 100=0.27%, 250=2.13%, 500=2.40%, 750=6.67%, 1000=3.20% 00:25:37.149 lat (msec) : 2000=13.60%, >=2000=71.73% 00:25:37.149 cpu : usr=0.00%, sys=1.18%, ctx=824, majf=0, minf=32769 00:25:37.149 IO depths 
: 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2% 00:25:37.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.149 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.149 issued rwts: total=375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.149 job1: (groupid=0, jobs=1): err= 0: pid=387711: Sun Nov 17 04:42:13 2024 00:25:37.149 read: IOPS=8, BW=9109KiB/s (9328kB/s)(90.0MiB/10117msec) 00:25:37.149 slat (usec): min=1487, max=2133.1k, avg=111110.39, stdev=414632.27 00:25:37.149 clat (msec): min=116, max=10113, avg=2939.11, stdev=3596.90 00:25:37.149 lat (msec): min=136, max=10116, avg=3050.22, stdev=3662.58 00:25:37.149 clat percentiles (msec): 00:25:37.149 | 1.00th=[ 116], 5.00th=[ 194], 10.00th=[ 266], 20.00th=[ 510], 00:25:37.149 | 30.00th=[ 827], 40.00th=[ 1070], 50.00th=[ 1368], 60.00th=[ 1569], 00:25:37.149 | 70.00th=[ 1687], 80.00th=[ 6007], 90.00th=[10000], 95.00th=[10134], 00:25:37.149 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:25:37.149 | 99.99th=[10134] 00:25:37.149 lat (msec) : 250=8.89%, 500=8.89%, 750=10.00%, 1000=8.89%, 2000=38.89% 00:25:37.149 lat (msec) : >=2000=24.44% 00:25:37.149 cpu : usr=0.01%, sys=0.56%, ctx=288, majf=0, minf=23041 00:25:37.149 IO depths : 1=1.1%, 2=2.2%, 4=4.4%, 8=8.9%, 16=17.8%, 32=35.6%, >=64=30.0% 00:25:37.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.149 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:37.149 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.149 job1: (groupid=0, jobs=1): err= 0: pid=387712: Sun Nov 17 04:42:13 2024 00:25:37.149 read: IOPS=11, BW=11.2MiB/s (11.8MB/s)(114MiB/10149msec) 00:25:37.149 slat (usec): min=1073, max=2210.0k, avg=88039.14, stdev=328780.01 00:25:37.149 clat (msec): min=111, max=10145, avg=4543.47, stdev=3824.92 00:25:37.149 lat (msec): min=156, max=10148, avg=4631.51, stdev=3837.49 00:25:37.149 clat percentiles (msec): 00:25:37.149 | 1.00th=[ 157], 5.00th=[ 380], 10.00th=[ 642], 20.00th=[ 1083], 00:25:37.149 | 30.00th=[ 1552], 40.00th=[ 2601], 50.00th=[ 3071], 60.00th=[ 3540], 00:25:37.149 | 70.00th=[ 8087], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:25:37.149 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:25:37.149 | 99.99th=[10134] 00:25:37.149 lat (msec) : 250=1.75%, 500=6.14%, 750=5.26%, 1000=3.51%, 2000=17.54% 00:25:37.149 lat (msec) : >=2000=65.79% 00:25:37.149 cpu : usr=0.00%, sys=0.86%, ctx=511, majf=0, minf=29185 00:25:37.149 IO depths : 1=0.9%, 2=1.8%, 4=3.5%, 8=7.0%, 16=14.0%, 32=28.1%, >=64=44.7% 00:25:37.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.149 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:37.149 issued rwts: total=114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.149 job1: (groupid=0, jobs=1): err= 0: pid=387713: Sun Nov 17 04:42:13 2024 00:25:37.149 read: IOPS=11, BW=11.6MiB/s (12.2MB/s)(118MiB/10129msec) 00:25:37.149 slat (usec): min=610, max=2077.3k, avg=84883.25, stdev=346320.88 00:25:37.149 clat (msec): min=111, max=10126, avg=4346.17, stdev=4076.59 00:25:37.149 lat (msec): min=154, max=10128, avg=4431.06, stdev=4091.93 00:25:37.149 clat percentiles (msec): 00:25:37.149 | 
1.00th=[ 155], 5.00th=[ 218], 10.00th=[ 376], 20.00th=[ 693], 00:25:37.149 | 30.00th=[ 995], 40.00th=[ 1301], 50.00th=[ 1620], 60.00th=[ 3809], 00:25:37.149 | 70.00th=[ 8154], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:25:37.149 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:25:37.149 | 99.99th=[10134] 00:25:37.149 lat (msec) : 250=5.93%, 500=8.47%, 750=6.78%, 1000=9.32%, 2000=21.19% 00:25:37.149 lat (msec) : >=2000=48.31% 00:25:37.150 cpu : usr=0.00%, sys=0.86%, ctx=404, majf=0, minf=30209 00:25:37.150 IO depths : 1=0.8%, 2=1.7%, 4=3.4%, 8=6.8%, 16=13.6%, 32=27.1%, >=64=46.6% 00:25:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:37.150 issued rwts: total=118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.150 job1: (groupid=0, jobs=1): err= 0: pid=387714: Sun Nov 17 04:42:13 2024 00:25:37.150 read: IOPS=53, BW=53.7MiB/s (56.3MB/s)(542MiB/10092msec) 00:25:37.150 slat (usec): min=41, max=2083.5k, avg=18508.52, stdev=118524.11 00:25:37.150 clat (msec): min=56, max=5102, avg=1356.50, stdev=1033.00 00:25:37.150 lat (msec): min=112, max=5105, avg=1375.01, stdev=1047.43 00:25:37.150 clat percentiles (msec): 00:25:37.150 | 1.00th=[ 205], 5.00th=[ 443], 10.00th=[ 472], 20.00th=[ 527], 00:25:37.150 | 30.00th=[ 600], 40.00th=[ 625], 50.00th=[ 835], 60.00th=[ 1318], 00:25:37.150 | 70.00th=[ 1871], 80.00th=[ 2400], 90.00th=[ 2802], 95.00th=[ 2937], 00:25:37.150 | 99.00th=[ 5000], 99.50th=[ 5067], 99.90th=[ 5134], 99.95th=[ 5134], 00:25:37.150 | 99.99th=[ 5134] 00:25:37.150 bw ( KiB/s): min=34816, max=212992, per=2.08%, avg=84787.20, stdev=68615.30, samples=10 00:25:37.150 iops : min= 34, max= 208, avg=82.80, stdev=67.01, samples=10 00:25:37.150 lat (msec) : 100=0.18%, 250=1.66%, 500=12.92%, 750=32.66%, 1000=7.38% 00:25:37.150 lat (msec) : 2000=17.34%, >=2000=27.86% 00:25:37.150 cpu : usr=0.02%, sys=1.11%, ctx=1044, majf=0, minf=32769 00:25:37.150 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.4% 00:25:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.150 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.150 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.150 job1: (groupid=0, jobs=1): err= 0: pid=387715: Sun Nov 17 04:42:13 2024 00:25:37.150 read: IOPS=9, BW=9401KiB/s (9627kB/s)(93.0MiB/10130msec) 00:25:37.150 slat (usec): min=974, max=3895.0k, avg=107669.49, stdev=500654.99 00:25:37.150 clat (msec): min=116, max=10128, avg=2317.06, stdev=3046.85 00:25:37.150 lat (msec): min=136, max=10129, avg=2424.73, stdev=3143.63 00:25:37.150 clat percentiles (msec): 00:25:37.150 | 1.00th=[ 116], 5.00th=[ 224], 10.00th=[ 342], 20.00th=[ 510], 00:25:37.150 | 30.00th=[ 726], 40.00th=[ 927], 50.00th=[ 1133], 60.00th=[ 1334], 00:25:37.150 | 70.00th=[ 1536], 80.00th=[ 3775], 90.00th=[ 9866], 95.00th=[10134], 00:25:37.150 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:25:37.150 | 99.99th=[10134] 00:25:37.150 lat (msec) : 250=6.45%, 500=11.83%, 750=12.90%, 1000=12.90%, 2000=34.41% 00:25:37.150 lat (msec) : >=2000=21.51% 00:25:37.150 cpu : usr=0.00%, sys=0.67%, ctx=275, majf=0, minf=23809 00:25:37.150 IO depths : 1=1.1%, 2=2.2%, 4=4.3%, 8=8.6%, 16=17.2%, 32=34.4%, >=64=32.3% 00:25:37.150 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:37.150 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.150 job1: (groupid=0, jobs=1): err= 0: pid=387716: Sun Nov 17 04:42:13 2024 00:25:37.150 read: IOPS=7, BW=7691KiB/s (7875kB/s)(76.0MiB/10119msec) 00:25:37.150 slat (usec): min=602, max=2161.6k, avg=131719.11, stdev=450375.83 00:25:37.150 clat (msec): min=107, max=10113, avg=3499.87, stdev=3983.59 00:25:37.150 lat (msec): min=126, max=10118, avg=3631.59, stdev=4035.10 00:25:37.150 clat percentiles (msec): 00:25:37.150 | 1.00th=[ 108], 5.00th=[ 155], 10.00th=[ 232], 20.00th=[ 489], 00:25:37.150 | 30.00th=[ 835], 40.00th=[ 1099], 50.00th=[ 1418], 60.00th=[ 1536], 00:25:37.150 | 70.00th=[ 5940], 80.00th=[ 9866], 90.00th=[10134], 95.00th=[10134], 00:25:37.150 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:25:37.150 | 99.99th=[10134] 00:25:37.150 lat (msec) : 250=13.16%, 500=7.89%, 750=7.89%, 1000=5.26%, 2000=34.21% 00:25:37.150 lat (msec) : >=2000=31.58% 00:25:37.150 cpu : usr=0.00%, sys=0.54%, ctx=289, majf=0, minf=19457 00:25:37.150 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.5%, 16=21.1%, 32=42.1%, >=64=17.1% 00:25:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:37.150 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.150 job1: (groupid=0, jobs=1): err= 0: pid=387717: Sun Nov 17 04:42:13 2024 00:25:37.150 read: IOPS=22, BW=22.7MiB/s (23.8MB/s)(230MiB/10133msec) 00:25:37.150 slat (usec): min=673, max=2102.4k, avg=43536.00, stdev=203021.32 00:25:37.150 clat (msec): min=118, max=8289, avg=3450.84, stdev=2463.01 00:25:37.150 lat (msec): min=167, max=9271, avg=3494.38, stdev=2484.24 00:25:37.150 clat percentiles (msec): 00:25:37.150 | 1.00th=[ 194], 5.00th=[ 414], 10.00th=[ 600], 20.00th=[ 1070], 00:25:37.150 | 30.00th=[ 1770], 40.00th=[ 2333], 50.00th=[ 3037], 60.00th=[ 3239], 00:25:37.150 | 70.00th=[ 4597], 80.00th=[ 4665], 90.00th=[ 7819], 95.00th=[ 8087], 00:25:37.150 | 99.00th=[ 8288], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288], 00:25:37.150 | 99.99th=[ 8288] 00:25:37.150 bw ( KiB/s): min= 8192, max=49152, per=0.86%, avg=34805.67, stdev=16129.14, samples=6 00:25:37.150 iops : min= 8, max= 48, avg=33.83, stdev=15.80, samples=6 00:25:37.150 lat (msec) : 250=1.74%, 500=5.22%, 750=5.65%, 1000=5.22%, 2000=15.65% 00:25:37.150 lat (msec) : >=2000=66.52% 00:25:37.150 cpu : usr=0.01%, sys=1.07%, ctx=716, majf=0, minf=32769 00:25:37.150 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=13.9%, >=64=72.6% 00:25:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.150 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:25:37.150 issued rwts: total=230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.150 job1: (groupid=0, jobs=1): err= 0: pid=387718: Sun Nov 17 04:42:13 2024 00:25:37.150 read: IOPS=18, BW=18.9MiB/s (19.8MB/s)(191MiB/10114msec) 00:25:37.150 slat (usec): min=43, max=2098.3k, avg=52368.91, stdev=245663.75 00:25:37.150 clat (msec): min=110, max=8106, avg=2650.71, stdev=1924.85 00:25:37.150 lat (msec): min=135, 
max=8112, avg=2703.08, stdev=1956.87 00:25:37.150 clat percentiles (msec): 00:25:37.150 | 1.00th=[ 136], 5.00th=[ 464], 10.00th=[ 1150], 20.00th=[ 1770], 00:25:37.150 | 30.00th=[ 1989], 40.00th=[ 2072], 50.00th=[ 2165], 60.00th=[ 2333], 00:25:37.150 | 70.00th=[ 2534], 80.00th=[ 2668], 90.00th=[ 6275], 95.00th=[ 8020], 00:25:37.150 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087], 00:25:37.150 | 99.99th=[ 8087] 00:25:37.150 bw ( KiB/s): min=16384, max=67584, per=0.80%, avg=32768.00, stdev=23470.23, samples=4 00:25:37.150 iops : min= 16, max= 66, avg=32.00, stdev=22.92, samples=4 00:25:37.150 lat (msec) : 250=1.57%, 500=3.66%, 750=2.09%, 1000=1.57%, 2000=21.99% 00:25:37.150 lat (msec) : >=2000=69.11% 00:25:37.150 cpu : usr=0.01%, sys=1.05%, ctx=657, majf=0, minf=32769 00:25:37.150 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=67.0% 00:25:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.150 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5% 00:25:37.150 issued rwts: total=191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.150 job1: (groupid=0, jobs=1): err= 0: pid=387719: Sun Nov 17 04:42:13 2024 00:25:37.150 read: IOPS=12, BW=12.3MiB/s (12.9MB/s)(125MiB/10150msec) 00:25:37.150 slat (usec): min=1590, max=2116.3k, avg=80355.74, stdev=304556.87 00:25:37.150 clat (msec): min=104, max=10147, avg=3202.91, stdev=2586.12 00:25:37.150 lat (msec): min=162, max=10149, avg=3283.27, stdev=2644.46 00:25:37.150 clat percentiles (msec): 00:25:37.150 | 1.00th=[ 163], 5.00th=[ 435], 10.00th=[ 1099], 20.00th=[ 1754], 00:25:37.150 | 30.00th=[ 1972], 40.00th=[ 2232], 50.00th=[ 2500], 60.00th=[ 2668], 00:25:37.150 | 70.00th=[ 3138], 80.00th=[ 3540], 90.00th=[ 8154], 95.00th=[10134], 00:25:37.150 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:25:37.150 | 99.99th=[10134] 00:25:37.150 lat (msec) : 250=2.40%, 500=3.20%, 750=2.40%, 2000=22.40%, >=2000=69.60% 00:25:37.150 cpu : usr=0.00%, sys=0.91%, ctx=643, majf=0, minf=32001 00:25:37.150 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.4%, 16=12.8%, 32=25.6%, >=64=49.6% 00:25:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:37.150 issued rwts: total=125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.150 job1: (groupid=0, jobs=1): err= 0: pid=387720: Sun Nov 17 04:42:13 2024 00:25:37.150 read: IOPS=14, BW=14.2MiB/s (14.9MB/s)(144MiB/10137msec) 00:25:37.150 slat (usec): min=629, max=2119.6k, avg=69663.93, stdev=286797.64 00:25:37.150 clat (msec): min=103, max=9516, avg=2890.88, stdev=2188.15 00:25:37.150 lat (msec): min=162, max=9543, avg=2960.55, stdev=2251.68 00:25:37.150 clat percentiles (msec): 00:25:37.150 | 1.00th=[ 163], 5.00th=[ 460], 10.00th=[ 718], 20.00th=[ 1435], 00:25:37.150 | 30.00th=[ 1821], 40.00th=[ 2056], 50.00th=[ 2467], 60.00th=[ 2836], 00:25:37.150 | 70.00th=[ 3171], 80.00th=[ 3540], 90.00th=[ 5738], 95.00th=[ 9329], 00:25:37.150 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:25:37.150 | 99.99th=[ 9463] 00:25:37.150 bw ( KiB/s): min=12288, max=20439, per=0.40%, avg=16363.50, stdev=5763.63, samples=2 00:25:37.150 iops : min= 12, max= 19, avg=15.50, stdev= 4.95, samples=2 00:25:37.150 lat (msec) : 250=2.08%, 500=3.47%, 750=5.56%, 1000=2.78%, 
2000=22.92% 00:25:37.150 lat (msec) : >=2000=63.19% 00:25:37.150 cpu : usr=0.00%, sys=0.77%, ctx=595, majf=0, minf=32769 00:25:37.150 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.1%, 32=22.2%, >=64=56.2% 00:25:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.150 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.6% 00:25:37.150 issued rwts: total=144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.150 job1: (groupid=0, jobs=1): err= 0: pid=387721: Sun Nov 17 04:42:13 2024 00:25:37.150 read: IOPS=8, BW=9185KiB/s (9405kB/s)(90.0MiB/10034msec) 00:25:37.150 slat (usec): min=387, max=2086.2k, avg=111260.95, stdev=411112.23 00:25:37.150 clat (msec): min=20, max=10018, avg=2074.13, stdev=2896.67 00:25:37.150 lat (msec): min=43, max=10033, avg=2185.39, stdev=3007.10 00:25:37.150 clat percentiles (msec): 00:25:37.150 | 1.00th=[ 21], 5.00th=[ 47], 10.00th=[ 70], 20.00th=[ 159], 00:25:37.150 | 30.00th=[ 514], 40.00th=[ 701], 50.00th=[ 877], 60.00th=[ 1217], 00:25:37.150 | 70.00th=[ 1452], 80.00th=[ 1720], 90.00th=[ 8154], 95.00th=[10000], 00:25:37.150 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:25:37.150 | 99.99th=[10000] 00:25:37.150 lat (msec) : 50=6.67%, 100=10.00%, 250=5.56%, 500=6.67%, 750=15.56% 00:25:37.150 lat (msec) : 1000=11.11%, 2000=24.44%, >=2000=20.00% 00:25:37.151 cpu : usr=0.01%, sys=0.45%, ctx=316, majf=0, minf=23041 00:25:37.151 IO depths : 1=1.1%, 2=2.2%, 4=4.4%, 8=8.9%, 16=17.8%, 32=35.6%, >=64=30.0% 00:25:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.151 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:37.151 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.151 job1: (groupid=0, jobs=1): err= 0: pid=387722: Sun Nov 17 04:42:13 2024 00:25:37.151 read: IOPS=12, BW=12.0MiB/s (12.6MB/s)(121MiB/10061msec) 00:25:37.151 slat (usec): min=527, max=2092.6k, avg=82670.08, stdev=313094.33 00:25:37.151 clat (msec): min=56, max=10052, avg=2868.84, stdev=2531.26 00:25:37.151 lat (msec): min=137, max=10059, avg=2951.51, stdev=2601.04 00:25:37.151 clat percentiles (msec): 00:25:37.151 | 1.00th=[ 138], 5.00th=[ 330], 10.00th=[ 426], 20.00th=[ 852], 00:25:37.151 | 30.00th=[ 1217], 40.00th=[ 1485], 50.00th=[ 2802], 60.00th=[ 3037], 00:25:37.151 | 70.00th=[ 3406], 80.00th=[ 3809], 90.00th=[ 6007], 95.00th=[ 9866], 00:25:37.151 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:25:37.151 | 99.99th=[10000] 00:25:37.151 lat (msec) : 100=0.83%, 250=2.48%, 500=8.26%, 750=4.13%, 1000=8.26% 00:25:37.151 lat (msec) : 2000=23.14%, >=2000=52.89% 00:25:37.151 cpu : usr=0.00%, sys=0.62%, ctx=468, majf=0, minf=30977 00:25:37.151 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.6%, 16=13.2%, 32=26.4%, >=64=47.9% 00:25:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.151 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:37.151 issued rwts: total=121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.151 job2: (groupid=0, jobs=1): err= 0: pid=387723: Sun Nov 17 04:42:13 2024 00:25:37.151 read: IOPS=23, BW=23.1MiB/s (24.2MB/s)(233MiB/10079msec) 00:25:37.151 slat (usec): min=72, max=2078.6k, avg=42947.73, stdev=251934.83 00:25:37.151 clat (msec): 
min=69, max=9166, avg=1659.58, stdev=2284.87 00:25:37.151 lat (msec): min=91, max=9175, avg=1702.52, stdev=2336.76 00:25:37.151 clat percentiles (msec): 00:25:37.151 | 1.00th=[ 92], 5.00th=[ 153], 10.00th=[ 266], 20.00th=[ 430], 00:25:37.151 | 30.00th=[ 625], 40.00th=[ 810], 50.00th=[ 978], 60.00th=[ 995], 00:25:37.151 | 70.00th=[ 1003], 80.00th=[ 1133], 90.00th=[ 5336], 95.00th=[ 9060], 00:25:37.151 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9194], 99.95th=[ 9194], 00:25:37.151 | 99.99th=[ 9194] 00:25:37.151 bw ( KiB/s): min=88064, max=129024, per=2.67%, avg=108544.00, stdev=28963.09, samples=2 00:25:37.151 iops : min= 86, max= 126, avg=106.00, stdev=28.28, samples=2 00:25:37.151 lat (msec) : 100=1.72%, 250=7.30%, 500=13.73%, 750=13.73%, 1000=31.33% 00:25:37.151 lat (msec) : 2000=14.16%, >=2000=18.03% 00:25:37.151 cpu : usr=0.04%, sys=1.25%, ctx=283, majf=0, minf=32769 00:25:37.151 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.9%, 32=13.7%, >=64=73.0% 00:25:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.151 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:25:37.151 issued rwts: total=233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.151 job2: (groupid=0, jobs=1): err= 0: pid=387724: Sun Nov 17 04:42:13 2024 00:25:37.151 read: IOPS=99, BW=99.5MiB/s (104MB/s)(1007MiB/10122msec) 00:25:37.151 slat (usec): min=38, max=2092.2k, avg=9953.84, stdev=80301.08 00:25:37.151 clat (msec): min=92, max=5708, avg=1241.47, stdev=1536.83 00:25:37.151 lat (msec): min=159, max=5753, avg=1251.42, stdev=1545.05 00:25:37.151 clat percentiles (msec): 00:25:37.151 | 1.00th=[ 264], 5.00th=[ 266], 10.00th=[ 268], 20.00th=[ 275], 00:25:37.151 | 30.00th=[ 330], 40.00th=[ 393], 50.00th=[ 409], 60.00th=[ 550], 00:25:37.151 | 70.00th=[ 1083], 80.00th=[ 2072], 90.00th=[ 4732], 95.00th=[ 4866], 00:25:37.151 | 99.00th=[ 5537], 99.50th=[ 5604], 99.90th=[ 5738], 99.95th=[ 5738], 00:25:37.151 | 99.99th=[ 5738] 00:25:37.151 bw ( KiB/s): min= 2048, max=477184, per=3.16%, avg=128511.21, stdev=145568.94, samples=14 00:25:37.151 iops : min= 2, max= 466, avg=125.29, stdev=142.21, samples=14 00:25:37.151 lat (msec) : 100=0.10%, 250=0.30%, 500=54.12%, 750=11.02%, 1000=3.38% 00:25:37.151 lat (msec) : 2000=7.94%, >=2000=23.14% 00:25:37.151 cpu : usr=0.06%, sys=1.57%, ctx=1579, majf=0, minf=32331 00:25:37.151 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:25:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.151 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.151 issued rwts: total=1007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.151 job2: (groupid=0, jobs=1): err= 0: pid=387725: Sun Nov 17 04:42:13 2024 00:25:37.151 read: IOPS=55, BW=55.2MiB/s (57.9MB/s)(560MiB/10141msec) 00:25:37.151 slat (usec): min=40, max=2109.7k, avg=17931.09, stdev=143269.53 00:25:37.151 clat (msec): min=96, max=6207, avg=1788.42, stdev=2062.86 00:25:37.151 lat (msec): min=151, max=6210, avg=1806.35, stdev=2069.28 00:25:37.151 clat percentiles (msec): 00:25:37.151 | 1.00th=[ 182], 5.00th=[ 418], 10.00th=[ 422], 20.00th=[ 430], 00:25:37.151 | 30.00th=[ 439], 40.00th=[ 472], 50.00th=[ 575], 60.00th=[ 768], 00:25:37.151 | 70.00th=[ 1603], 80.00th=[ 4732], 90.00th=[ 5805], 95.00th=[ 6141], 00:25:37.151 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6208], 
99.95th=[ 6208], 00:25:37.151 | 99.99th=[ 6208] 00:25:37.151 bw ( KiB/s): min=20480, max=311296, per=2.41%, avg=98304.00, stdev=117161.89, samples=9 00:25:37.151 iops : min= 20, max= 304, avg=96.00, stdev=114.42, samples=9 00:25:37.151 lat (msec) : 100=0.18%, 250=1.96%, 500=39.46%, 750=17.14%, 1000=6.43% 00:25:37.151 lat (msec) : 2000=8.21%, >=2000=26.61% 00:25:37.151 cpu : usr=0.00%, sys=1.06%, ctx=1095, majf=0, minf=32769 00:25:37.151 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.8% 00:25:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.151 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.151 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.151 job2: (groupid=0, jobs=1): err= 0: pid=387726: Sun Nov 17 04:42:13 2024 00:25:37.151 read: IOPS=49, BW=49.8MiB/s (52.2MB/s)(501MiB/10060msec) 00:25:37.151 slat (usec): min=36, max=2113.7k, avg=19964.57, stdev=123023.80 00:25:37.151 clat (msec): min=54, max=5942, avg=1982.28, stdev=1287.21 00:25:37.151 lat (msec): min=78, max=6008, avg=2002.25, stdev=1295.55 00:25:37.151 clat percentiles (msec): 00:25:37.151 | 1.00th=[ 129], 5.00th=[ 651], 10.00th=[ 667], 20.00th=[ 709], 00:25:37.151 | 30.00th=[ 776], 40.00th=[ 1036], 50.00th=[ 1770], 60.00th=[ 2400], 00:25:37.151 | 70.00th=[ 2668], 80.00th=[ 3339], 90.00th=[ 3943], 95.00th=[ 4329], 00:25:37.151 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 5940], 99.95th=[ 5940], 00:25:37.151 | 99.99th=[ 5940] 00:25:37.151 bw ( KiB/s): min= 4096, max=188416, per=1.71%, avg=69632.00, stdev=60372.59, samples=11 00:25:37.151 iops : min= 4, max= 184, avg=68.00, stdev=58.96, samples=11 00:25:37.151 lat (msec) : 100=0.60%, 250=1.20%, 500=1.00%, 750=23.75%, 1000=11.98% 00:25:37.151 lat (msec) : 2000=13.37%, >=2000=48.10% 00:25:37.151 cpu : usr=0.05%, sys=1.07%, ctx=1500, majf=0, minf=32769 00:25:37.151 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4% 00:25:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.151 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:37.151 issued rwts: total=501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.151 job2: (groupid=0, jobs=1): err= 0: pid=387727: Sun Nov 17 04:42:13 2024 00:25:37.151 read: IOPS=83, BW=83.2MiB/s (87.3MB/s)(839MiB/10082msec) 00:25:37.151 slat (usec): min=32, max=1630.6k, avg=11938.91, stdev=58815.28 00:25:37.151 clat (msec): min=59, max=2845, avg=1200.89, stdev=441.24 00:25:37.151 lat (msec): min=119, max=2852, avg=1212.83, stdev=444.55 00:25:37.151 clat percentiles (msec): 00:25:37.151 | 1.00th=[ 132], 5.00th=[ 768], 10.00th=[ 869], 20.00th=[ 902], 00:25:37.151 | 30.00th=[ 953], 40.00th=[ 986], 50.00th=[ 1011], 60.00th=[ 1133], 00:25:37.151 | 70.00th=[ 1368], 80.00th=[ 1636], 90.00th=[ 1871], 95.00th=[ 1938], 00:25:37.151 | 99.00th=[ 2668], 99.50th=[ 2702], 99.90th=[ 2836], 99.95th=[ 2836], 00:25:37.151 | 99.99th=[ 2836] 00:25:37.151 bw ( KiB/s): min=43008, max=147456, per=2.38%, avg=97051.13, stdev=41373.77, samples=15 00:25:37.151 iops : min= 42, max= 144, avg=94.67, stdev=40.50, samples=15 00:25:37.151 lat (msec) : 100=0.12%, 250=1.55%, 500=1.31%, 750=1.67%, 1000=43.74% 00:25:37.151 lat (msec) : 2000=47.79%, >=2000=3.81% 00:25:37.151 cpu : usr=0.08%, sys=1.70%, ctx=1212, majf=0, minf=32769 00:25:37.151 IO depths : 1=0.1%, 
2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:25:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.151 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.151 issued rwts: total=839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.151 job2: (groupid=0, jobs=1): err= 0: pid=387728: Sun Nov 17 04:42:13 2024 00:25:37.151 read: IOPS=56, BW=56.4MiB/s (59.2MB/s)(566MiB/10031msec) 00:25:37.151 slat (usec): min=52, max=2129.4k, avg=17672.86, stdev=122679.00 00:25:37.151 clat (msec): min=25, max=5838, avg=2076.03, stdev=1764.03 00:25:37.151 lat (msec): min=41, max=5852, avg=2093.70, stdev=1769.38 00:25:37.151 clat percentiles (msec): 00:25:37.151 | 1.00th=[ 62], 5.00th=[ 609], 10.00th=[ 877], 20.00th=[ 919], 00:25:37.151 | 30.00th=[ 995], 40.00th=[ 1267], 50.00th=[ 1334], 60.00th=[ 1385], 00:25:37.151 | 70.00th=[ 1519], 80.00th=[ 5067], 90.00th=[ 5336], 95.00th=[ 5537], 00:25:37.151 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5873], 99.95th=[ 5873], 00:25:37.151 | 99.99th=[ 5873] 00:25:37.151 bw ( KiB/s): min= 2048, max=153600, per=1.89%, avg=77065.09, stdev=50592.21, samples=11 00:25:37.151 iops : min= 2, max= 150, avg=75.18, stdev=49.41, samples=11 00:25:37.151 lat (msec) : 50=0.71%, 100=0.71%, 250=0.88%, 500=1.94%, 750=2.30% 00:25:37.151 lat (msec) : 1000=24.03%, 2000=46.82%, >=2000=22.61% 00:25:37.151 cpu : usr=0.04%, sys=1.15%, ctx=992, majf=0, minf=32769 00:25:37.151 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.9% 00:25:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.151 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.151 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.151 job2: (groupid=0, jobs=1): err= 0: pid=387729: Sun Nov 17 04:42:13 2024 00:25:37.151 read: IOPS=76, BW=76.6MiB/s (80.3MB/s)(773MiB/10089msec) 00:25:37.151 slat (usec): min=34, max=1496.2k, avg=12945.70, stdev=72121.36 00:25:37.151 clat (msec): min=78, max=5013, avg=1560.51, stdev=1378.95 00:25:37.151 lat (msec): min=135, max=5016, avg=1573.46, stdev=1383.19 00:25:37.151 clat percentiles (msec): 00:25:37.152 | 1.00th=[ 262], 5.00th=[ 305], 10.00th=[ 326], 20.00th=[ 575], 00:25:37.152 | 30.00th=[ 651], 40.00th=[ 919], 50.00th=[ 1167], 60.00th=[ 1385], 00:25:37.152 | 70.00th=[ 1569], 80.00th=[ 1787], 90.00th=[ 4530], 95.00th=[ 4866], 00:25:37.152 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:25:37.152 | 99.99th=[ 5000] 00:25:37.152 bw ( KiB/s): min=14336, max=329728, per=2.50%, avg=101761.62, stdev=91568.82, samples=13 00:25:37.152 iops : min= 14, max= 322, avg=99.31, stdev=89.46, samples=13 00:25:37.152 lat (msec) : 100=0.13%, 250=0.65%, 500=14.36%, 750=21.09%, 1000=5.95% 00:25:37.152 lat (msec) : 2000=40.49%, >=2000=17.34% 00:25:37.152 cpu : usr=0.02%, sys=1.62%, ctx=1792, majf=0, minf=32769 00:25:37.152 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.8% 00:25:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.152 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.152 issued rwts: total=773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.152 job2: (groupid=0, jobs=1): err= 0: pid=387730: Sun Nov 17 04:42:13 
2024 00:25:37.152 read: IOPS=35, BW=35.7MiB/s (37.4MB/s)(359MiB/10069msec) 00:25:37.152 slat (usec): min=34, max=2040.6k, avg=27888.65, stdev=177741.23 00:25:37.152 clat (msec): min=54, max=7902, avg=3332.35, stdev=2927.81 00:25:37.152 lat (msec): min=79, max=7915, avg=3360.24, stdev=2933.86 00:25:37.152 clat percentiles (msec): 00:25:37.152 | 1.00th=[ 96], 5.00th=[ 363], 10.00th=[ 701], 20.00th=[ 1011], 00:25:37.152 | 30.00th=[ 1062], 40.00th=[ 1116], 50.00th=[ 1267], 60.00th=[ 2970], 00:25:37.152 | 70.00th=[ 6007], 80.00th=[ 7349], 90.00th=[ 7684], 95.00th=[ 7819], 00:25:37.152 | 99.00th=[ 7886], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886], 00:25:37.152 | 99.99th=[ 7886] 00:25:37.152 bw ( KiB/s): min=22528, max=112640, per=1.30%, avg=52754.67, stdev=28642.33, samples=9 00:25:37.152 iops : min= 22, max= 110, avg=51.33, stdev=28.10, samples=9 00:25:37.152 lat (msec) : 100=1.11%, 250=1.95%, 500=2.79%, 750=4.18%, 1000=8.08% 00:25:37.152 lat (msec) : 2000=40.11%, >=2000=41.78% 00:25:37.152 cpu : usr=0.01%, sys=1.00%, ctx=775, majf=0, minf=32769 00:25:37.152 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=8.9%, >=64=82.5% 00:25:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.152 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.152 issued rwts: total=359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.152 job2: (groupid=0, jobs=1): err= 0: pid=387731: Sun Nov 17 04:42:13 2024 00:25:37.152 read: IOPS=137, BW=137MiB/s (144MB/s)(1376MiB/10033msec) 00:25:37.152 slat (usec): min=35, max=175920, avg=7266.29, stdev=13186.19 00:25:37.152 clat (msec): min=25, max=2335, avg=880.23, stdev=522.28 00:25:37.152 lat (msec): min=34, max=2343, avg=887.50, stdev=524.66 00:25:37.152 clat percentiles (msec): 00:25:37.152 | 1.00th=[ 153], 5.00th=[ 426], 10.00th=[ 472], 20.00th=[ 502], 00:25:37.152 | 30.00th=[ 535], 40.00th=[ 592], 50.00th=[ 659], 60.00th=[ 751], 00:25:37.152 | 70.00th=[ 835], 80.00th=[ 1368], 90.00th=[ 1787], 95.00th=[ 2039], 00:25:37.152 | 99.00th=[ 2299], 99.50th=[ 2333], 99.90th=[ 2333], 99.95th=[ 2333], 00:25:37.152 | 99.99th=[ 2333] 00:25:37.152 bw ( KiB/s): min=36864, max=301056, per=3.41%, avg=138688.28, stdev=87082.36, samples=18 00:25:37.152 iops : min= 36, max= 294, avg=135.39, stdev=85.09, samples=18 00:25:37.152 lat (msec) : 50=0.36%, 100=0.22%, 250=0.80%, 500=16.35%, 750=42.08% 00:25:37.152 lat (msec) : 1000=13.88%, 2000=20.06%, >=2000=6.25% 00:25:37.152 cpu : usr=0.02%, sys=2.32%, ctx=2108, majf=0, minf=32769 00:25:37.152 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:25:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.152 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.152 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.152 job2: (groupid=0, jobs=1): err= 0: pid=387732: Sun Nov 17 04:42:13 2024 00:25:37.152 read: IOPS=81, BW=82.0MiB/s (85.9MB/s)(830MiB/10126msec) 00:25:37.152 slat (usec): min=39, max=1738.6k, avg=12057.87, stdev=77108.98 00:25:37.152 clat (msec): min=113, max=4791, avg=1395.55, stdev=1352.11 00:25:37.152 lat (msec): min=189, max=4797, avg=1407.60, stdev=1358.07 00:25:37.152 clat percentiles (msec): 00:25:37.152 | 1.00th=[ 296], 5.00th=[ 300], 10.00th=[ 313], 20.00th=[ 321], 00:25:37.152 | 30.00th=[ 380], 40.00th=[ 435], 
50.00th=[ 818], 60.00th=[ 1485], 00:25:37.152 | 70.00th=[ 1787], 80.00th=[ 1938], 90.00th=[ 4463], 95.00th=[ 4665], 00:25:37.152 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:37.152 | 99.99th=[ 4799] 00:25:37.152 bw ( KiB/s): min=26570, max=389120, per=2.95%, avg=119967.00, stdev=120884.81, samples=12 00:25:37.152 iops : min= 25, max= 380, avg=117.00, stdev=118.17, samples=12 00:25:37.152 lat (msec) : 250=0.84%, 500=41.57%, 750=5.66%, 1000=4.70%, 2000=28.31% 00:25:37.152 lat (msec) : >=2000=18.92% 00:25:37.152 cpu : usr=0.02%, sys=1.83%, ctx=2114, majf=0, minf=32769 00:25:37.152 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:25:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.152 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.152 issued rwts: total=830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.152 job2: (groupid=0, jobs=1): err= 0: pid=387733: Sun Nov 17 04:42:13 2024 00:25:37.152 read: IOPS=46, BW=46.2MiB/s (48.5MB/s)(467MiB/10104msec) 00:25:37.152 slat (usec): min=69, max=2062.9k, avg=21413.99, stdev=154445.47 00:25:37.152 clat (msec): min=99, max=7304, avg=1423.08, stdev=1538.16 00:25:37.152 lat (msec): min=117, max=7327, avg=1444.50, stdev=1562.54 00:25:37.152 clat percentiles (msec): 00:25:37.152 | 1.00th=[ 148], 5.00th=[ 288], 10.00th=[ 435], 20.00th=[ 810], 00:25:37.152 | 30.00th=[ 894], 40.00th=[ 927], 50.00th=[ 944], 60.00th=[ 1003], 00:25:37.152 | 70.00th=[ 1234], 80.00th=[ 1418], 90.00th=[ 1586], 95.00th=[ 5537], 00:25:37.152 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7282], 99.95th=[ 7282], 00:25:37.152 | 99.99th=[ 7282] 00:25:37.152 bw ( KiB/s): min=57344, max=157696, per=2.85%, avg=116053.33, stdev=35346.05, samples=6 00:25:37.152 iops : min= 56, max= 154, avg=113.33, stdev=34.52, samples=6 00:25:37.152 lat (msec) : 100=0.21%, 250=4.07%, 500=7.28%, 750=6.64%, 1000=41.54% 00:25:37.152 lat (msec) : 2000=30.41%, >=2000=9.85% 00:25:37.152 cpu : usr=0.04%, sys=1.60%, ctx=577, majf=0, minf=32769 00:25:37.152 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:25:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.152 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:37.152 issued rwts: total=467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.152 job2: (groupid=0, jobs=1): err= 0: pid=387734: Sun Nov 17 04:42:13 2024 00:25:37.152 read: IOPS=30, BW=30.5MiB/s (31.9MB/s)(308MiB/10111msec) 00:25:37.152 slat (usec): min=39, max=2086.1k, avg=32526.99, stdev=192812.81 00:25:37.152 clat (msec): min=90, max=7608, avg=2320.83, stdev=2088.16 00:25:37.152 lat (msec): min=135, max=7619, avg=2353.36, stdev=2105.68 00:25:37.152 clat percentiles (msec): 00:25:37.152 | 1.00th=[ 157], 5.00th=[ 460], 10.00th=[ 810], 20.00th=[ 1351], 00:25:37.152 | 30.00th=[ 1552], 40.00th=[ 1586], 50.00th=[ 1603], 60.00th=[ 1670], 00:25:37.152 | 70.00th=[ 1821], 80.00th=[ 2005], 90.00th=[ 7416], 95.00th=[ 7550], 00:25:37.152 | 99.00th=[ 7617], 99.50th=[ 7617], 99.90th=[ 7617], 99.95th=[ 7617], 00:25:37.152 | 99.99th=[ 7617] 00:25:37.152 bw ( KiB/s): min=26677, max=90292, per=1.51%, avg=61478.83, stdev=25931.07, samples=6 00:25:37.152 iops : min= 26, max= 88, avg=60.00, stdev=25.30, samples=6 00:25:37.152 lat (msec) : 100=0.32%, 250=2.27%, 500=3.25%, 
750=2.92%, 1000=4.55% 00:25:37.152 lat (msec) : 2000=66.23%, >=2000=20.45% 00:25:37.152 cpu : usr=0.02%, sys=1.53%, ctx=676, majf=0, minf=32769 00:25:37.152 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.5% 00:25:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.152 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:37.152 issued rwts: total=308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.152 job2: (groupid=0, jobs=1): err= 0: pid=387735: Sun Nov 17 04:42:13 2024 00:25:37.152 read: IOPS=61, BW=61.4MiB/s (64.4MB/s)(618MiB/10066msec) 00:25:37.152 slat (usec): min=36, max=2102.7k, avg=16187.19, stdev=138215.97 00:25:37.152 clat (msec): min=59, max=7085, avg=1002.07, stdev=1440.83 00:25:37.152 lat (msec): min=70, max=7105, avg=1018.25, stdev=1461.01 00:25:37.152 clat percentiles (msec): 00:25:37.152 | 1.00th=[ 163], 5.00th=[ 271], 10.00th=[ 321], 20.00th=[ 351], 00:25:37.152 | 30.00th=[ 388], 40.00th=[ 405], 50.00th=[ 414], 60.00th=[ 592], 00:25:37.152 | 70.00th=[ 835], 80.00th=[ 1401], 90.00th=[ 1754], 95.00th=[ 5201], 00:25:37.152 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:25:37.152 | 99.99th=[ 7080] 00:25:37.152 bw ( KiB/s): min=36864, max=331776, per=4.11%, avg=167453.17, stdev=128923.03, samples=6 00:25:37.152 iops : min= 36, max= 324, avg=163.50, stdev=125.93, samples=6 00:25:37.152 lat (msec) : 100=0.32%, 250=1.62%, 500=54.21%, 750=10.84%, 1000=6.80% 00:25:37.152 lat (msec) : 2000=20.87%, >=2000=5.34% 00:25:37.152 cpu : usr=0.02%, sys=1.04%, ctx=1632, majf=0, minf=32769 00:25:37.152 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:25:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.153 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.153 issued rwts: total=618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.153 job3: (groupid=0, jobs=1): err= 0: pid=387736: Sun Nov 17 04:42:13 2024 00:25:37.153 read: IOPS=94, BW=94.3MiB/s (98.9MB/s)(951MiB/10085msec) 00:25:37.153 slat (usec): min=32, max=1697.1k, avg=10509.98, stdev=71410.37 00:25:37.153 clat (msec): min=82, max=3092, avg=1137.69, stdev=733.98 00:25:37.153 lat (msec): min=86, max=3094, avg=1148.20, stdev=736.12 00:25:37.153 clat percentiles (msec): 00:25:37.153 | 1.00th=[ 279], 5.00th=[ 321], 10.00th=[ 460], 20.00th=[ 844], 00:25:37.153 | 30.00th=[ 885], 40.00th=[ 894], 50.00th=[ 902], 60.00th=[ 911], 00:25:37.153 | 70.00th=[ 919], 80.00th=[ 1053], 90.00th=[ 2567], 95.00th=[ 3004], 00:25:37.153 | 99.00th=[ 3071], 99.50th=[ 3104], 99.90th=[ 3104], 99.95th=[ 3104], 00:25:37.153 | 99.99th=[ 3104] 00:25:37.153 bw ( KiB/s): min= 4096, max=346112, per=2.96%, avg=120500.50, stdev=81887.65, samples=14 00:25:37.153 iops : min= 4, max= 338, avg=117.57, stdev=79.95, samples=14 00:25:37.153 lat (msec) : 100=0.21%, 250=0.42%, 500=10.20%, 750=6.31%, 1000=62.67% 00:25:37.153 lat (msec) : 2000=3.89%, >=2000=16.30% 00:25:37.153 cpu : usr=0.11%, sys=2.12%, ctx=1035, majf=0, minf=32769 00:25:37.153 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:25:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.153 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.153 issued rwts: total=951,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:25:37.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.153 job3: (groupid=0, jobs=1): err= 0: pid=387737: Sun Nov 17 04:42:13 2024 00:25:37.153 read: IOPS=34, BW=34.5MiB/s (36.1MB/s)(349MiB/10123msec) 00:25:37.153 slat (usec): min=35, max=2100.1k, avg=28681.63, stdev=190276.34 00:25:37.153 clat (msec): min=111, max=8478, avg=3508.36, stdev=3365.89 00:25:37.153 lat (msec): min=134, max=8483, avg=3537.04, stdev=3371.22 00:25:37.153 clat percentiles (msec): 00:25:37.153 | 1.00th=[ 215], 5.00th=[ 542], 10.00th=[ 751], 20.00th=[ 835], 00:25:37.153 | 30.00th=[ 936], 40.00th=[ 944], 50.00th=[ 969], 60.00th=[ 1552], 00:25:37.153 | 70.00th=[ 7483], 80.00th=[ 8154], 90.00th=[ 8288], 95.00th=[ 8356], 00:25:37.153 | 99.00th=[ 8423], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:25:37.153 | 99.99th=[ 8490] 00:25:37.153 bw ( KiB/s): min= 2048, max=139264, per=1.11%, avg=45262.40, stdev=51325.54, samples=10 00:25:37.153 iops : min= 2, max= 136, avg=44.20, stdev=50.12, samples=10 00:25:37.153 lat (msec) : 250=1.43%, 500=2.87%, 750=5.44%, 1000=42.41%, 2000=9.17% 00:25:37.153 lat (msec) : >=2000=38.68% 00:25:37.153 cpu : usr=0.01%, sys=1.11%, ctx=979, majf=0, minf=32769 00:25:37.153 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.9% 00:25:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.153 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.153 issued rwts: total=349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.153 job3: (groupid=0, jobs=1): err= 0: pid=387738: Sun Nov 17 04:42:13 2024 00:25:37.153 read: IOPS=37, BW=37.2MiB/s (39.0MB/s)(375MiB/10092msec) 00:25:37.153 slat (usec): min=36, max=2065.0k, avg=26695.79, stdev=171875.86 00:25:37.153 clat (msec): min=77, max=5056, avg=2394.57, stdev=1541.44 00:25:37.153 lat (msec): min=182, max=5061, avg=2421.27, stdev=1543.51 00:25:37.153 clat percentiles (msec): 00:25:37.153 | 1.00th=[ 241], 5.00th=[ 659], 10.00th=[ 1011], 20.00th=[ 1045], 00:25:37.153 | 30.00th=[ 1070], 40.00th=[ 1116], 50.00th=[ 1234], 60.00th=[ 3406], 00:25:37.153 | 70.00th=[ 3608], 80.00th=[ 3943], 90.00th=[ 4933], 95.00th=[ 5000], 00:25:37.153 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:25:37.153 | 99.99th=[ 5067] 00:25:37.153 bw ( KiB/s): min= 6144, max=128766, per=1.38%, avg=56152.11, stdev=43870.36, samples=9 00:25:37.153 iops : min= 6, max= 125, avg=54.67, stdev=42.56, samples=9 00:25:37.153 lat (msec) : 100=0.27%, 250=0.80%, 500=1.87%, 750=2.40%, 1000=2.93% 00:25:37.153 lat (msec) : 2000=45.87%, >=2000=45.87% 00:25:37.153 cpu : usr=0.02%, sys=1.34%, ctx=804, majf=0, minf=32769 00:25:37.153 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2% 00:25:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.153 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.153 issued rwts: total=375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.153 job3: (groupid=0, jobs=1): err= 0: pid=387739: Sun Nov 17 04:42:13 2024 00:25:37.153 read: IOPS=41, BW=41.5MiB/s (43.6MB/s)(419MiB/10087msec) 00:25:37.153 slat (usec): min=40, max=1795.0k, avg=23894.75, stdev=148822.28 00:25:37.153 clat (msec): min=72, max=7910, avg=2840.93, stdev=2645.39 00:25:37.153 lat (msec): min=93, max=7912, avg=2864.83, stdev=2652.78 
00:25:37.153 clat percentiles (msec): 00:25:37.153 | 1.00th=[ 109], 5.00th=[ 262], 10.00th=[ 617], 20.00th=[ 651], 00:25:37.153 | 30.00th=[ 785], 40.00th=[ 986], 50.00th=[ 1200], 60.00th=[ 2702], 00:25:37.153 | 70.00th=[ 4597], 80.00th=[ 5805], 90.00th=[ 7483], 95.00th=[ 7752], 00:25:37.153 | 99.00th=[ 7819], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886], 00:25:37.153 | 99.99th=[ 7886] 00:25:37.153 bw ( KiB/s): min=10240, max=206848, per=1.47%, avg=59642.80, stdev=55818.58, samples=10 00:25:37.153 iops : min= 10, max= 202, avg=58.20, stdev=54.50, samples=10 00:25:37.153 lat (msec) : 100=0.48%, 250=3.82%, 500=3.58%, 750=20.76%, 1000=12.41% 00:25:37.153 lat (msec) : 2000=15.99%, >=2000=42.96% 00:25:37.153 cpu : usr=0.02%, sys=1.12%, ctx=820, majf=0, minf=32769 00:25:37.153 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:25:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.153 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:37.153 issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.153 job3: (groupid=0, jobs=1): err= 0: pid=387740: Sun Nov 17 04:42:13 2024 00:25:37.153 read: IOPS=36, BW=36.2MiB/s (38.0MB/s)(365MiB/10078msec) 00:25:37.153 slat (usec): min=440, max=1938.2k, avg=27405.57, stdev=134214.31 00:25:37.153 clat (msec): min=72, max=6706, avg=3220.13, stdev=2271.24 00:25:37.153 lat (msec): min=89, max=6712, avg=3247.53, stdev=2274.28 00:25:37.153 clat percentiles (msec): 00:25:37.153 | 1.00th=[ 132], 5.00th=[ 575], 10.00th=[ 1133], 20.00th=[ 1217], 00:25:37.153 | 30.00th=[ 1469], 40.00th=[ 1804], 50.00th=[ 2265], 60.00th=[ 2534], 00:25:37.153 | 70.00th=[ 5805], 80.00th=[ 6275], 90.00th=[ 6477], 95.00th=[ 6611], 00:25:37.153 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:25:37.153 | 99.99th=[ 6678] 00:25:37.153 bw ( KiB/s): min=10240, max=71680, per=1.00%, avg=40609.33, stdev=17138.79, samples=12 00:25:37.153 iops : min= 10, max= 70, avg=39.50, stdev=16.87, samples=12 00:25:37.153 lat (msec) : 100=0.82%, 250=1.92%, 500=1.92%, 750=1.64%, 1000=2.19% 00:25:37.153 lat (msec) : 2000=35.89%, >=2000=55.62% 00:25:37.153 cpu : usr=0.04%, sys=1.28%, ctx=1053, majf=0, minf=32769 00:25:37.153 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.7% 00:25:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.153 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.153 issued rwts: total=365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.153 job3: (groupid=0, jobs=1): err= 0: pid=387741: Sun Nov 17 04:42:13 2024 00:25:37.153 read: IOPS=46, BW=46.5MiB/s (48.8MB/s)(470MiB/10106msec) 00:25:37.153 slat (usec): min=46, max=2065.1k, avg=21273.94, stdev=156152.45 00:25:37.153 clat (msec): min=103, max=5197, avg=1572.25, stdev=1111.41 00:25:37.153 lat (msec): min=136, max=5204, avg=1593.52, stdev=1125.44 00:25:37.153 clat percentiles (msec): 00:25:37.153 | 1.00th=[ 163], 5.00th=[ 305], 10.00th=[ 468], 20.00th=[ 827], 00:25:37.153 | 30.00th=[ 986], 40.00th=[ 995], 50.00th=[ 1053], 60.00th=[ 1099], 00:25:37.153 | 70.00th=[ 1301], 80.00th=[ 3037], 90.00th=[ 3071], 95.00th=[ 3104], 00:25:37.153 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5201], 99.95th=[ 5201], 00:25:37.153 | 99.99th=[ 5201] 00:25:37.153 bw ( KiB/s): min=53248, max=131072, per=2.47%, 
avg=100352.00, stdev=36097.51, samples=7 00:25:37.153 iops : min= 52, max= 128, avg=98.00, stdev=35.25, samples=7 00:25:37.153 lat (msec) : 250=3.40%, 500=6.81%, 750=6.81%, 1000=25.32%, 2000=27.66% 00:25:37.153 lat (msec) : >=2000=30.00% 00:25:37.153 cpu : usr=0.05%, sys=1.37%, ctx=674, majf=0, minf=32769 00:25:37.153 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6% 00:25:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.153 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:37.153 issued rwts: total=470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.153 job3: (groupid=0, jobs=1): err= 0: pid=387742: Sun Nov 17 04:42:13 2024 00:25:37.153 read: IOPS=33, BW=33.1MiB/s (34.7MB/s)(336MiB/10145msec) 00:25:37.153 slat (usec): min=35, max=2150.9k, avg=29857.09, stdev=183488.00 00:25:37.153 clat (msec): min=111, max=8289, avg=3592.92, stdev=2776.51 00:25:37.153 lat (msec): min=157, max=8290, avg=3622.77, stdev=2781.02 00:25:37.153 clat percentiles (msec): 00:25:37.153 | 1.00th=[ 249], 5.00th=[ 609], 10.00th=[ 961], 20.00th=[ 1053], 00:25:37.153 | 30.00th=[ 1234], 40.00th=[ 1351], 50.00th=[ 2970], 60.00th=[ 3842], 00:25:37.153 | 70.00th=[ 5201], 80.00th=[ 7282], 90.00th=[ 7886], 95.00th=[ 8154], 00:25:37.153 | 99.00th=[ 8288], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288], 00:25:37.153 | 99.99th=[ 8288] 00:25:37.153 bw ( KiB/s): min= 4096, max=141312, per=1.05%, avg=42598.40, stdev=38759.60, samples=10 00:25:37.153 iops : min= 4, max= 138, avg=41.60, stdev=37.85, samples=10 00:25:37.153 lat (msec) : 250=1.19%, 500=2.68%, 750=2.98%, 1000=9.23%, 2000=33.63% 00:25:37.153 lat (msec) : >=2000=50.30% 00:25:37.153 cpu : usr=0.00%, sys=1.06%, ctx=724, majf=0, minf=32769 00:25:37.153 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.5%, >=64=81.2% 00:25:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.153 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:37.153 issued rwts: total=336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.153 job3: (groupid=0, jobs=1): err= 0: pid=387743: Sun Nov 17 04:42:13 2024 00:25:37.153 read: IOPS=41, BW=41.5MiB/s (43.5MB/s)(419MiB/10107msec) 00:25:37.153 slat (usec): min=33, max=2166.9k, avg=23865.91, stdev=177927.28 00:25:37.153 clat (msec): min=104, max=8143, avg=2931.39, stdev=3229.35 00:25:37.153 lat (msec): min=106, max=8152, avg=2955.25, stdev=3237.72 00:25:37.153 clat percentiles (msec): 00:25:37.153 | 1.00th=[ 109], 5.00th=[ 215], 10.00th=[ 334], 20.00th=[ 634], 00:25:37.153 | 30.00th=[ 726], 40.00th=[ 894], 50.00th=[ 995], 60.00th=[ 1234], 00:25:37.153 | 70.00th=[ 5403], 80.00th=[ 7819], 90.00th=[ 7953], 95.00th=[ 8087], 00:25:37.153 | 99.00th=[ 8087], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:25:37.153 | 99.99th=[ 8154] 00:25:37.153 bw ( KiB/s): min= 2048, max=157381, per=1.63%, avg=66411.22, stdev=49154.86, samples=9 00:25:37.153 iops : min= 2, max= 153, avg=64.78, stdev=47.84, samples=9 00:25:37.153 lat (msec) : 250=7.40%, 500=7.16%, 750=16.71%, 1000=19.57%, 2000=17.42% 00:25:37.153 lat (msec) : >=2000=31.74% 00:25:37.153 cpu : usr=0.02%, sys=0.99%, ctx=900, majf=0, minf=32769 00:25:37.153 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:25:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:37.153 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:37.153 issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.153 job3: (groupid=0, jobs=1): err= 0: pid=387744: Sun Nov 17 04:42:13 2024 00:25:37.153 read: IOPS=69, BW=70.0MiB/s (73.4MB/s)(707MiB/10106msec) 00:25:37.154 slat (usec): min=35, max=1957.5k, avg=14175.52, stdev=74650.96 00:25:37.154 clat (msec): min=79, max=4069, avg=1690.55, stdev=976.24 00:25:37.154 lat (msec): min=131, max=4071, avg=1704.72, stdev=977.58 00:25:37.154 clat percentiles (msec): 00:25:37.154 | 1.00th=[ 275], 5.00th=[ 600], 10.00th=[ 885], 20.00th=[ 936], 00:25:37.154 | 30.00th=[ 1217], 40.00th=[ 1368], 50.00th=[ 1485], 60.00th=[ 1552], 00:25:37.154 | 70.00th=[ 1620], 80.00th=[ 1737], 90.00th=[ 3708], 95.00th=[ 3842], 00:25:37.154 | 99.00th=[ 4044], 99.50th=[ 4077], 99.90th=[ 4077], 99.95th=[ 4077], 00:25:37.154 | 99.99th=[ 4077] 00:25:37.154 bw ( KiB/s): min=12288, max=143360, per=1.94%, avg=79015.93, stdev=39150.53, samples=15 00:25:37.154 iops : min= 12, max= 140, avg=77.13, stdev=38.24, samples=15 00:25:37.154 lat (msec) : 100=0.14%, 250=0.71%, 500=3.68%, 750=1.56%, 1000=19.09% 00:25:37.154 lat (msec) : 2000=56.86%, >=2000=17.96% 00:25:37.154 cpu : usr=0.02%, sys=1.47%, ctx=1298, majf=0, minf=32769 00:25:37.154 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:25:37.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.154 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.154 issued rwts: total=707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.154 job3: (groupid=0, jobs=1): err= 0: pid=387745: Sun Nov 17 04:42:13 2024 00:25:37.154 read: IOPS=27, BW=27.3MiB/s (28.6MB/s)(275MiB/10080msec) 00:25:37.154 slat (usec): min=34, max=2113.7k, avg=36499.60, stdev=206076.37 00:25:37.154 clat (msec): min=41, max=8437, avg=4289.36, stdev=3399.45 00:25:37.154 lat (msec): min=131, max=8454, avg=4325.86, stdev=3400.51 00:25:37.154 clat percentiles (msec): 00:25:37.154 | 1.00th=[ 132], 5.00th=[ 150], 10.00th=[ 514], 20.00th=[ 885], 00:25:37.154 | 30.00th=[ 1318], 40.00th=[ 1485], 50.00th=[ 3373], 60.00th=[ 7550], 00:25:37.154 | 70.00th=[ 7953], 80.00th=[ 8087], 90.00th=[ 8288], 95.00th=[ 8356], 00:25:37.154 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:25:37.154 | 99.99th=[ 8423] 00:25:37.154 bw ( KiB/s): min= 2048, max=67584, per=0.74%, avg=30100.20, stdev=22564.33, samples=10 00:25:37.154 iops : min= 2, max= 66, avg=29.20, stdev=22.20, samples=10 00:25:37.154 lat (msec) : 50=0.36%, 250=6.55%, 500=2.91%, 750=4.36%, 1000=6.91% 00:25:37.154 lat (msec) : 2000=28.36%, >=2000=50.55% 00:25:37.154 cpu : usr=0.00%, sys=1.20%, ctx=727, majf=0, minf=32769 00:25:37.154 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.1% 00:25:37.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.154 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:25:37.154 issued rwts: total=275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.154 job3: (groupid=0, jobs=1): err= 0: pid=387746: Sun Nov 17 04:42:13 2024 00:25:37.154 read: IOPS=34, BW=34.7MiB/s (36.3MB/s)(350MiB/10099msec) 00:25:37.154 slat (usec): min=49, max=2174.9k, avg=28611.27, 
stdev=196369.88 00:25:37.154 clat (msec): min=82, max=8405, avg=3520.00, stdev=3356.55 00:25:37.154 lat (msec): min=101, max=8413, avg=3548.61, stdev=3361.82 00:25:37.154 clat percentiles (msec): 00:25:37.154 | 1.00th=[ 159], 5.00th=[ 489], 10.00th=[ 676], 20.00th=[ 961], 00:25:37.154 | 30.00th=[ 995], 40.00th=[ 1062], 50.00th=[ 1150], 60.00th=[ 1502], 00:25:37.154 | 70.00th=[ 7617], 80.00th=[ 8020], 90.00th=[ 8221], 95.00th=[ 8288], 00:25:37.154 | 99.00th=[ 8356], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:25:37.154 | 99.99th=[ 8423] 00:25:37.154 bw ( KiB/s): min= 2048, max=135168, per=1.24%, avg=50485.00, stdev=50149.31, samples=9 00:25:37.154 iops : min= 2, max= 132, avg=49.11, stdev=48.91, samples=9 00:25:37.154 lat (msec) : 100=0.29%, 250=1.43%, 500=4.29%, 750=4.86%, 1000=20.86% 00:25:37.154 lat (msec) : 2000=30.86%, >=2000=37.43% 00:25:37.154 cpu : usr=0.00%, sys=1.22%, ctx=965, majf=0, minf=32769 00:25:37.154 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.1%, >=64=82.0% 00:25:37.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.154 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.154 issued rwts: total=350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.154 job3: (groupid=0, jobs=1): err= 0: pid=387747: Sun Nov 17 04:42:13 2024 00:25:37.154 read: IOPS=82, BW=82.4MiB/s (86.4MB/s)(834MiB/10120msec) 00:25:37.154 slat (usec): min=38, max=2171.1k, avg=11994.57, stdev=92264.31 00:25:37.154 clat (msec): min=111, max=5640, avg=1450.90, stdev=1348.41 00:25:37.154 lat (msec): min=145, max=6030, avg=1462.89, stdev=1355.76 00:25:37.154 clat percentiles (msec): 00:25:37.154 | 1.00th=[ 243], 5.00th=[ 409], 10.00th=[ 447], 20.00th=[ 493], 00:25:37.154 | 30.00th=[ 510], 40.00th=[ 584], 50.00th=[ 869], 60.00th=[ 1217], 00:25:37.154 | 70.00th=[ 1284], 80.00th=[ 2903], 90.00th=[ 3406], 95.00th=[ 4597], 00:25:37.154 | 99.00th=[ 5470], 99.50th=[ 5604], 99.90th=[ 5671], 99.95th=[ 5671], 00:25:37.154 | 99.99th=[ 5671] 00:25:37.154 bw ( KiB/s): min=16384, max=307200, per=2.74%, avg=111373.38, stdev=102054.38, samples=13 00:25:37.154 iops : min= 16, max= 300, avg=108.69, stdev=99.72, samples=13 00:25:37.154 lat (msec) : 250=1.08%, 500=26.50%, 750=21.10%, 1000=5.16%, 2000=22.06% 00:25:37.154 lat (msec) : >=2000=24.10% 00:25:37.154 cpu : usr=0.10%, sys=1.66%, ctx=1248, majf=0, minf=32769 00:25:37.154 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.4% 00:25:37.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.154 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.154 issued rwts: total=834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.154 job3: (groupid=0, jobs=1): err= 0: pid=387748: Sun Nov 17 04:42:13 2024 00:25:37.154 read: IOPS=28, BW=28.6MiB/s (29.9MB/s)(289MiB/10118msec) 00:25:37.154 slat (usec): min=51, max=2115.0k, avg=34687.04, stdev=209671.59 00:25:37.154 clat (msec): min=92, max=8202, avg=3156.79, stdev=2491.20 00:25:37.154 lat (msec): min=126, max=8248, avg=3191.48, stdev=2508.38 00:25:37.154 clat percentiles (msec): 00:25:37.154 | 1.00th=[ 129], 5.00th=[ 215], 10.00th=[ 502], 20.00th=[ 793], 00:25:37.154 | 30.00th=[ 1250], 40.00th=[ 2534], 50.00th=[ 2601], 60.00th=[ 2735], 00:25:37.154 | 70.00th=[ 2802], 80.00th=[ 6946], 90.00th=[ 7148], 95.00th=[ 7215], 00:25:37.154 | 99.00th=[ 
7282], 99.50th=[ 8154], 99.90th=[ 8221], 99.95th=[ 8221], 00:25:37.154 | 99.99th=[ 8221] 00:25:37.154 bw ( KiB/s): min= 6131, max=112640, per=1.35%, avg=54938.00, stdev=44289.50, samples=6 00:25:37.154 iops : min= 5, max= 110, avg=53.17, stdev=43.76, samples=6 00:25:37.154 lat (msec) : 100=0.35%, 250=5.88%, 500=3.81%, 750=8.65%, 1000=8.65% 00:25:37.154 lat (msec) : 2000=7.61%, >=2000=65.05% 00:25:37.154 cpu : usr=0.00%, sys=1.21%, ctx=529, majf=0, minf=32769 00:25:37.154 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.5%, 32=11.1%, >=64=78.2% 00:25:37.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.154 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:25:37.154 issued rwts: total=289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.154 job4: (groupid=0, jobs=1): err= 0: pid=387749: Sun Nov 17 04:42:13 2024 00:25:37.154 read: IOPS=75, BW=75.8MiB/s (79.5MB/s)(768MiB/10129msec) 00:25:37.154 slat (usec): min=44, max=1708.1k, avg=13025.75, stdev=81264.28 00:25:37.154 clat (msec): min=121, max=4062, avg=1498.39, stdev=1034.40 00:25:37.154 lat (msec): min=143, max=4065, avg=1511.41, stdev=1036.53 00:25:37.154 clat percentiles (msec): 00:25:37.154 | 1.00th=[ 355], 5.00th=[ 877], 10.00th=[ 885], 20.00th=[ 885], 00:25:37.154 | 30.00th=[ 902], 40.00th=[ 911], 50.00th=[ 911], 60.00th=[ 927], 00:25:37.154 | 70.00th=[ 1167], 80.00th=[ 2567], 90.00th=[ 3373], 95.00th=[ 3842], 00:25:37.154 | 99.00th=[ 4010], 99.50th=[ 4044], 99.90th=[ 4077], 99.95th=[ 4077], 00:25:37.154 | 99.99th=[ 4077] 00:25:37.154 bw ( KiB/s): min=10219, max=157696, per=2.30%, avg=93718.21, stdev=57221.73, samples=14 00:25:37.154 iops : min= 9, max= 154, avg=91.29, stdev=55.95, samples=14 00:25:37.154 lat (msec) : 250=0.78%, 500=0.65%, 750=0.91%, 1000=65.62%, 2000=6.51% 00:25:37.154 lat (msec) : >=2000=25.52% 00:25:37.154 cpu : usr=0.03%, sys=1.62%, ctx=985, majf=0, minf=32769 00:25:37.154 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:25:37.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.154 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.154 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.154 job4: (groupid=0, jobs=1): err= 0: pid=387750: Sun Nov 17 04:42:13 2024 00:25:37.154 read: IOPS=63, BW=63.1MiB/s (66.2MB/s)(639MiB/10125msec) 00:25:37.154 slat (usec): min=35, max=2032.1k, avg=15720.26, stdev=113682.40 00:25:37.154 clat (msec): min=76, max=5754, avg=1868.62, stdev=1778.83 00:25:37.154 lat (msec): min=151, max=5798, avg=1884.34, stdev=1783.89 00:25:37.154 clat percentiles (msec): 00:25:37.154 | 1.00th=[ 207], 5.00th=[ 376], 10.00th=[ 735], 20.00th=[ 827], 00:25:37.154 | 30.00th=[ 936], 40.00th=[ 1045], 50.00th=[ 1099], 60.00th=[ 1167], 00:25:37.154 | 70.00th=[ 1200], 80.00th=[ 2903], 90.00th=[ 5470], 95.00th=[ 5671], 00:25:37.154 | 99.00th=[ 5738], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:25:37.154 | 99.99th=[ 5738] 00:25:37.154 bw ( KiB/s): min= 8192, max=182272, per=2.14%, avg=87210.67, stdev=54349.81, samples=12 00:25:37.154 iops : min= 8, max= 178, avg=85.17, stdev=53.08, samples=12 00:25:37.154 lat (msec) : 100=0.16%, 250=1.25%, 500=5.79%, 750=5.01%, 1000=22.54% 00:25:37.154 lat (msec) : 2000=44.13%, >=2000=21.13% 00:25:37.154 cpu : usr=0.06%, sys=1.31%, ctx=1208, majf=0, minf=32769 
00:25:37.154 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:25:37.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.154 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.154 issued rwts: total=639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.154 job4: (groupid=0, jobs=1): err= 0: pid=387751: Sun Nov 17 04:42:13 2024 00:25:37.154 read: IOPS=101, BW=102MiB/s (107MB/s)(1029MiB/10130msec) 00:25:37.154 slat (usec): min=36, max=1750.7k, avg=9755.09, stdev=56830.51 00:25:37.154 clat (msec): min=86, max=2804, avg=1178.28, stdev=635.66 00:25:37.154 lat (msec): min=142, max=2823, avg=1188.04, stdev=638.11 00:25:37.154 clat percentiles (msec): 00:25:37.154 | 1.00th=[ 167], 5.00th=[ 426], 10.00th=[ 709], 20.00th=[ 735], 00:25:37.154 | 30.00th=[ 818], 40.00th=[ 919], 50.00th=[ 1020], 60.00th=[ 1150], 00:25:37.154 | 70.00th=[ 1284], 80.00th=[ 1368], 90.00th=[ 2635], 95.00th=[ 2702], 00:25:37.154 | 99.00th=[ 2769], 99.50th=[ 2769], 99.90th=[ 2802], 99.95th=[ 2802], 00:25:37.154 | 99.99th=[ 2802] 00:25:37.154 bw ( KiB/s): min=55296, max=188416, per=3.02%, avg=123000.93, stdev=36951.85, samples=15 00:25:37.154 iops : min= 54, max= 184, avg=120.07, stdev=36.10, samples=15 00:25:37.154 lat (msec) : 100=0.10%, 250=2.24%, 500=3.79%, 750=14.29%, 1000=28.77% 00:25:37.154 lat (msec) : 2000=38.48%, >=2000=12.34% 00:25:37.154 cpu : usr=0.06%, sys=1.84%, ctx=1415, majf=0, minf=32769 00:25:37.154 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:25:37.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.155 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.155 issued rwts: total=1029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.155 job4: (groupid=0, jobs=1): err= 0: pid=387752: Sun Nov 17 04:42:13 2024 00:25:37.155 read: IOPS=20, BW=20.3MiB/s (21.2MB/s)(204MiB/10068msec) 00:25:37.155 slat (usec): min=201, max=2130.2k, avg=49019.67, stdev=228835.88 00:25:37.155 clat (msec): min=67, max=8900, avg=5681.21, stdev=3306.83 00:25:37.155 lat (msec): min=101, max=8904, avg=5730.23, stdev=3293.61 00:25:37.155 clat percentiles (msec): 00:25:37.155 | 1.00th=[ 106], 5.00th=[ 430], 10.00th=[ 802], 20.00th=[ 1385], 00:25:37.155 | 30.00th=[ 2165], 40.00th=[ 6007], 50.00th=[ 8020], 60.00th=[ 8221], 00:25:37.155 | 70.00th=[ 8356], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[ 8792], 00:25:37.155 | 99.00th=[ 8792], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:25:37.155 | 99.99th=[ 8926] 00:25:37.155 bw ( KiB/s): min= 6144, max=38912, per=0.45%, avg=18426.14, stdev=10963.98, samples=7 00:25:37.155 iops : min= 6, max= 38, avg=17.86, stdev=10.68, samples=7 00:25:37.155 lat (msec) : 100=0.49%, 250=2.45%, 500=3.43%, 750=2.94%, 1000=2.45% 00:25:37.155 lat (msec) : 2000=15.20%, >=2000=73.04% 00:25:37.155 cpu : usr=0.00%, sys=0.91%, ctx=751, majf=0, minf=32769 00:25:37.155 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.7%, >=64=69.1% 00:25:37.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.155 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:25:37.155 issued rwts: total=204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.155 job4: (groupid=0, jobs=1): err= 0: 
pid=387753: Sun Nov 17 04:42:13 2024 00:25:37.155 read: IOPS=15, BW=15.8MiB/s (16.5MB/s)(160MiB/10141msec) 00:25:37.155 slat (usec): min=54, max=2078.8k, avg=62713.03, stdev=288593.47 00:25:37.155 clat (msec): min=106, max=9537, avg=3062.81, stdev=3015.44 00:25:37.155 lat (msec): min=157, max=9567, avg=3125.53, stdev=3054.03 00:25:37.155 clat percentiles (msec): 00:25:37.155 | 1.00th=[ 159], 5.00th=[ 342], 10.00th=[ 659], 20.00th=[ 953], 00:25:37.155 | 30.00th=[ 1301], 40.00th=[ 1435], 50.00th=[ 1502], 60.00th=[ 1754], 00:25:37.155 | 70.00th=[ 3440], 80.00th=[ 3943], 90.00th=[ 9329], 95.00th=[ 9463], 00:25:37.155 | 99.00th=[ 9463], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:25:37.155 | 99.99th=[ 9597] 00:25:37.155 bw ( KiB/s): min=28672, max=36864, per=0.80%, avg=32768.00, stdev=5792.62, samples=2 00:25:37.155 iops : min= 28, max= 36, avg=32.00, stdev= 5.66, samples=2 00:25:37.155 lat (msec) : 250=1.88%, 500=5.62%, 750=10.00%, 1000=2.50%, 2000=42.50% 00:25:37.155 lat (msec) : >=2000=37.50% 00:25:37.155 cpu : usr=0.00%, sys=0.75%, ctx=406, majf=0, minf=32769 00:25:37.155 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=5.0%, 16=10.0%, 32=20.0%, >=64=60.6% 00:25:37.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.155 complete : 0=0.0%, 4=97.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.9% 00:25:37.155 issued rwts: total=160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.155 job4: (groupid=0, jobs=1): err= 0: pid=387754: Sun Nov 17 04:42:13 2024 00:25:37.155 read: IOPS=38, BW=38.7MiB/s (40.6MB/s)(390MiB/10082msec) 00:25:37.155 slat (usec): min=35, max=2057.4k, avg=25671.63, stdev=156251.60 00:25:37.155 clat (msec): min=67, max=5263, avg=1909.09, stdev=1315.40 00:25:37.155 lat (msec): min=98, max=5269, avg=1934.76, stdev=1325.37 00:25:37.155 clat percentiles (msec): 00:25:37.155 | 1.00th=[ 106], 5.00th=[ 531], 10.00th=[ 667], 20.00th=[ 869], 00:25:37.155 | 30.00th=[ 919], 40.00th=[ 978], 50.00th=[ 1099], 60.00th=[ 1418], 00:25:37.155 | 70.00th=[ 3339], 80.00th=[ 3507], 90.00th=[ 3675], 95.00th=[ 3809], 00:25:37.155 | 99.00th=[ 5201], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:25:37.155 | 99.99th=[ 5269] 00:25:37.155 bw ( KiB/s): min=14336, max=153600, per=1.65%, avg=67273.12, stdev=53233.56, samples=8 00:25:37.155 iops : min= 14, max= 150, avg=65.62, stdev=52.01, samples=8 00:25:37.155 lat (msec) : 100=0.51%, 250=1.28%, 500=1.54%, 750=7.69%, 1000=31.28% 00:25:37.155 lat (msec) : 2000=20.26%, >=2000=37.44% 00:25:37.155 cpu : usr=0.03%, sys=1.15%, ctx=807, majf=0, minf=32769 00:25:37.155 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8% 00:25:37.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.155 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.155 issued rwts: total=390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.155 job4: (groupid=0, jobs=1): err= 0: pid=387755: Sun Nov 17 04:42:13 2024 00:25:37.155 read: IOPS=52, BW=52.2MiB/s (54.7MB/s)(528MiB/10122msec) 00:25:37.155 slat (usec): min=60, max=2055.8k, avg=18936.67, stdev=124679.58 00:25:37.155 clat (msec): min=120, max=8086, avg=2323.17, stdev=1368.33 00:25:37.155 lat (msec): min=173, max=8117, avg=2342.11, stdev=1368.74 00:25:37.155 clat percentiles (msec): 00:25:37.155 | 1.00th=[ 363], 5.00th=[ 760], 10.00th=[ 927], 20.00th=[ 978], 00:25:37.155 | 30.00th=[ 
1116], 40.00th=[ 1183], 50.00th=[ 1569], 60.00th=[ 3507], 00:25:37.155 | 70.00th=[ 3608], 80.00th=[ 3742], 90.00th=[ 3842], 95.00th=[ 3910], 00:25:37.155 | 99.00th=[ 3977], 99.50th=[ 3977], 99.90th=[ 8087], 99.95th=[ 8087], 00:25:37.155 | 99.99th=[ 8087] 00:25:37.155 bw ( KiB/s): min= 6144, max=135168, per=1.68%, avg=68415.33, stdev=47609.88, samples=12 00:25:37.155 iops : min= 6, max= 132, avg=66.58, stdev=46.62, samples=12 00:25:37.155 lat (msec) : 250=0.57%, 500=1.33%, 750=2.65%, 1000=18.37%, 2000=29.36% 00:25:37.155 lat (msec) : >=2000=47.73% 00:25:37.155 cpu : usr=0.03%, sys=1.51%, ctx=1094, majf=0, minf=32769 00:25:37.155 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.1% 00:25:37.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.155 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.155 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.155 job4: (groupid=0, jobs=1): err= 0: pid=387756: Sun Nov 17 04:42:13 2024 00:25:37.155 read: IOPS=38, BW=38.8MiB/s (40.6MB/s)(392MiB/10113msec) 00:25:37.155 slat (usec): min=37, max=2049.6k, avg=25508.96, stdev=172174.48 00:25:37.155 clat (msec): min=110, max=9878, avg=3116.00, stdev=2838.12 00:25:37.155 lat (msec): min=156, max=9939, avg=3141.51, stdev=2844.66 00:25:37.155 clat percentiles (msec): 00:25:37.155 | 1.00th=[ 239], 5.00th=[ 743], 10.00th=[ 776], 20.00th=[ 860], 00:25:37.155 | 30.00th=[ 902], 40.00th=[ 936], 50.00th=[ 1045], 60.00th=[ 2802], 00:25:37.155 | 70.00th=[ 4866], 80.00th=[ 6745], 90.00th=[ 8020], 95.00th=[ 8154], 00:25:37.155 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 9866], 99.95th=[ 9866], 00:25:37.155 | 99.99th=[ 9866] 00:25:37.155 bw ( KiB/s): min= 6131, max=197002, per=1.21%, avg=49369.45, stdev=60827.41, samples=11 00:25:37.155 iops : min= 5, max= 192, avg=48.00, stdev=59.43, samples=11 00:25:37.155 lat (msec) : 250=1.02%, 500=1.53%, 750=4.08%, 1000=39.03%, 2000=10.97% 00:25:37.155 lat (msec) : >=2000=43.37% 00:25:37.155 cpu : usr=0.01%, sys=1.25%, ctx=693, majf=0, minf=32769 00:25:37.155 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:25:37.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.155 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.155 issued rwts: total=392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.155 job4: (groupid=0, jobs=1): err= 0: pid=387757: Sun Nov 17 04:42:13 2024 00:25:37.155 read: IOPS=31, BW=31.9MiB/s (33.4MB/s)(322MiB/10106msec) 00:25:37.155 slat (usec): min=36, max=1985.3k, avg=31079.04, stdev=149342.42 00:25:37.155 clat (msec): min=96, max=8169, avg=3606.34, stdev=2189.78 00:25:37.155 lat (msec): min=107, max=8225, avg=3637.42, stdev=2191.26 00:25:37.155 clat percentiles (msec): 00:25:37.155 | 1.00th=[ 161], 5.00th=[ 592], 10.00th=[ 936], 20.00th=[ 1737], 00:25:37.155 | 30.00th=[ 2005], 40.00th=[ 2467], 50.00th=[ 2635], 60.00th=[ 4396], 00:25:37.155 | 70.00th=[ 5940], 80.00th=[ 6275], 90.00th=[ 6342], 95.00th=[ 6409], 00:25:37.155 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 8154], 99.95th=[ 8154], 00:25:37.155 | 99.99th=[ 8154] 00:25:37.155 bw ( KiB/s): min= 4096, max=75624, per=0.89%, avg=36282.64, stdev=19587.29, samples=11 00:25:37.155 iops : min= 4, max= 73, avg=35.27, stdev=18.90, samples=11 00:25:37.155 lat (msec) : 100=0.31%, 250=2.17%, 
500=1.86%, 750=4.04%, 1000=2.17% 00:25:37.155 lat (msec) : 2000=19.25%, >=2000=70.19% 00:25:37.155 cpu : usr=0.01%, sys=1.16%, ctx=1018, majf=0, minf=32769 00:25:37.155 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.4% 00:25:37.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.155 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:37.155 issued rwts: total=322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.155 job4: (groupid=0, jobs=1): err= 0: pid=387758: Sun Nov 17 04:42:13 2024 00:25:37.155 read: IOPS=48, BW=48.6MiB/s (51.0MB/s)(491MiB/10100msec) 00:25:37.155 slat (usec): min=36, max=1838.2k, avg=20363.82, stdev=116425.83 00:25:37.155 clat (msec): min=97, max=4830, avg=1822.35, stdev=1203.73 00:25:37.155 lat (msec): min=105, max=4842, avg=1842.71, stdev=1211.03 00:25:37.155 clat percentiles (msec): 00:25:37.155 | 1.00th=[ 163], 5.00th=[ 751], 10.00th=[ 877], 20.00th=[ 877], 00:25:37.155 | 30.00th=[ 885], 40.00th=[ 961], 50.00th=[ 1217], 60.00th=[ 1569], 00:25:37.155 | 70.00th=[ 2467], 80.00th=[ 3239], 90.00th=[ 3809], 95.00th=[ 3943], 00:25:37.155 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:37.155 | 99.99th=[ 4799] 00:25:37.155 bw ( KiB/s): min=18432, max=155648, per=2.03%, avg=82830.22, stdev=58521.50, samples=9 00:25:37.155 iops : min= 18, max= 152, avg=80.89, stdev=57.15, samples=9 00:25:37.155 lat (msec) : 100=0.20%, 250=2.04%, 500=1.43%, 750=1.22%, 1000=40.12% 00:25:37.155 lat (msec) : 2000=20.37%, >=2000=34.62% 00:25:37.155 cpu : usr=0.03%, sys=1.17%, ctx=873, majf=0, minf=32769 00:25:37.155 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.2% 00:25:37.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.155 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:37.155 issued rwts: total=491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.155 job4: (groupid=0, jobs=1): err= 0: pid=387759: Sun Nov 17 04:42:13 2024 00:25:37.155 read: IOPS=35, BW=35.2MiB/s (36.9MB/s)(356MiB/10114msec) 00:25:37.155 slat (usec): min=134, max=2080.8k, avg=28150.16, stdev=152755.87 00:25:37.155 clat (msec): min=90, max=7059, avg=3316.21, stdev=2419.95 00:25:37.155 lat (msec): min=134, max=7134, avg=3344.36, stdev=2422.63 00:25:37.155 clat percentiles (msec): 00:25:37.155 | 1.00th=[ 207], 5.00th=[ 426], 10.00th=[ 1003], 20.00th=[ 1469], 00:25:37.156 | 30.00th=[ 1569], 40.00th=[ 1720], 50.00th=[ 1821], 60.00th=[ 2232], 00:25:37.156 | 70.00th=[ 6208], 80.00th=[ 6477], 90.00th=[ 6812], 95.00th=[ 6946], 00:25:37.156 | 99.00th=[ 7013], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:25:37.156 | 99.99th=[ 7080] 00:25:37.156 bw ( KiB/s): min=12288, max=88064, per=1.04%, avg=42449.45, stdev=25184.42, samples=11 00:25:37.156 iops : min= 12, max= 86, avg=41.45, stdev=24.59, samples=11 00:25:37.156 lat (msec) : 100=0.28%, 250=1.97%, 500=3.09%, 750=2.53%, 1000=1.97% 00:25:37.156 lat (msec) : 2000=44.10%, >=2000=46.07% 00:25:37.156 cpu : usr=0.03%, sys=1.22%, ctx=984, majf=0, minf=32769 00:25:37.156 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=9.0%, >=64=82.3% 00:25:37.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.156 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.156 issued rwts: total=356,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:37.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.156 job4: (groupid=0, jobs=1): err= 0: pid=387760: Sun Nov 17 04:42:13 2024 00:25:37.156 read: IOPS=26, BW=26.5MiB/s (27.8MB/s)(268MiB/10095msec) 00:25:37.156 slat (usec): min=38, max=2131.5k, avg=37336.43, stdev=216348.64 00:25:37.156 clat (msec): min=87, max=8747, avg=4518.84, stdev=3340.23 00:25:37.156 lat (msec): min=136, max=8750, avg=4556.17, stdev=3339.69 00:25:37.156 clat percentiles (msec): 00:25:37.156 | 1.00th=[ 209], 5.00th=[ 735], 10.00th=[ 1116], 20.00th=[ 1234], 00:25:37.156 | 30.00th=[ 1301], 40.00th=[ 1603], 50.00th=[ 3775], 60.00th=[ 7550], 00:25:37.156 | 70.00th=[ 7886], 80.00th=[ 8288], 90.00th=[ 8557], 95.00th=[ 8658], 00:25:37.156 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:25:37.156 | 99.99th=[ 8792] 00:25:37.156 bw ( KiB/s): min= 2048, max=69632, per=0.79%, avg=32085.33, stdev=24447.66, samples=9 00:25:37.156 iops : min= 2, max= 68, avg=31.33, stdev=23.87, samples=9 00:25:37.156 lat (msec) : 100=0.37%, 250=1.12%, 500=1.87%, 750=1.87%, 1000=1.87% 00:25:37.156 lat (msec) : 2000=41.04%, >=2000=51.87% 00:25:37.156 cpu : usr=0.00%, sys=0.89%, ctx=684, majf=0, minf=32769 00:25:37.156 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=6.0%, 32=11.9%, >=64=76.5% 00:25:37.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.156 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:25:37.156 issued rwts: total=268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.156 job4: (groupid=0, jobs=1): err= 0: pid=387761: Sun Nov 17 04:42:13 2024 00:25:37.156 read: IOPS=53, BW=53.5MiB/s (56.1MB/s)(538MiB/10055msec) 00:25:37.156 slat (usec): min=50, max=2068.3k, avg=18595.05, stdev=126189.68 00:25:37.156 clat (msec): min=47, max=5826, avg=2232.59, stdev=1937.02 00:25:37.156 lat (msec): min=56, max=5828, avg=2251.18, stdev=1940.13 00:25:37.156 clat percentiles (msec): 00:25:37.156 | 1.00th=[ 121], 5.00th=[ 701], 10.00th=[ 726], 20.00th=[ 776], 00:25:37.156 | 30.00th=[ 961], 40.00th=[ 1036], 50.00th=[ 1150], 60.00th=[ 1838], 00:25:37.156 | 70.00th=[ 2106], 80.00th=[ 5537], 90.00th=[ 5671], 95.00th=[ 5738], 00:25:37.156 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:25:37.156 | 99.99th=[ 5805] 00:25:37.156 bw ( KiB/s): min= 4096, max=184320, per=1.82%, avg=73894.82, stdev=61520.54, samples=11 00:25:37.156 iops : min= 4, max= 180, avg=72.09, stdev=60.04, samples=11 00:25:37.156 lat (msec) : 50=0.19%, 100=0.56%, 250=0.74%, 500=0.93%, 750=12.64% 00:25:37.156 lat (msec) : 1000=21.19%, 2000=29.74%, >=2000=34.01% 00:25:37.156 cpu : usr=0.05%, sys=1.00%, ctx=1001, majf=0, minf=32769 00:25:37.156 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3% 00:25:37.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.156 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.156 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.156 job5: (groupid=0, jobs=1): err= 0: pid=387762: Sun Nov 17 04:42:13 2024 00:25:37.156 read: IOPS=67, BW=67.6MiB/s (70.9MB/s)(680MiB/10063msec) 00:25:37.156 slat (usec): min=102, max=1721.9k, avg=14703.61, stdev=71983.48 00:25:37.156 clat (msec): min=61, max=3600, avg=1410.13, stdev=875.04 00:25:37.156 lat (msec): min=65, max=3602, 
avg=1424.83, stdev=879.90 00:25:37.156 clat percentiles (msec): 00:25:37.156 | 1.00th=[ 213], 5.00th=[ 355], 10.00th=[ 376], 20.00th=[ 693], 00:25:37.156 | 30.00th=[ 743], 40.00th=[ 1003], 50.00th=[ 1234], 60.00th=[ 1519], 00:25:37.156 | 70.00th=[ 1754], 80.00th=[ 1989], 90.00th=[ 2903], 95.00th=[ 3239], 00:25:37.156 | 99.00th=[ 3507], 99.50th=[ 3574], 99.90th=[ 3608], 99.95th=[ 3608], 00:25:37.156 | 99.99th=[ 3608] 00:25:37.156 bw ( KiB/s): min=18468, max=325632, per=1.99%, avg=80900.57, stdev=80949.97, samples=14 00:25:37.156 iops : min= 18, max= 318, avg=78.93, stdev=79.11, samples=14 00:25:37.156 lat (msec) : 100=0.59%, 250=0.44%, 500=13.97%, 750=15.74%, 1000=8.68% 00:25:37.156 lat (msec) : 2000=40.88%, >=2000=19.71% 00:25:37.156 cpu : usr=0.03%, sys=1.09%, ctx=2240, majf=0, minf=32769 00:25:37.156 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:25:37.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.156 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.156 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.156 job5: (groupid=0, jobs=1): err= 0: pid=387763: Sun Nov 17 04:42:13 2024 00:25:37.156 read: IOPS=54, BW=54.6MiB/s (57.3MB/s)(550MiB/10072msec) 00:25:37.156 slat (usec): min=42, max=1815.1k, avg=18194.99, stdev=108474.57 00:25:37.156 clat (msec): min=62, max=5141, avg=1503.68, stdev=1040.37 00:25:37.156 lat (msec): min=72, max=5188, avg=1521.87, stdev=1051.35 00:25:37.156 clat percentiles (msec): 00:25:37.156 | 1.00th=[ 144], 5.00th=[ 317], 10.00th=[ 326], 20.00th=[ 527], 00:25:37.156 | 30.00th=[ 936], 40.00th=[ 1250], 50.00th=[ 1334], 60.00th=[ 1502], 00:25:37.156 | 70.00th=[ 1703], 80.00th=[ 2265], 90.00th=[ 2869], 95.00th=[ 3205], 00:25:37.156 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5134], 99.95th=[ 5134], 00:25:37.156 | 99.99th=[ 5134] 00:25:37.156 bw ( KiB/s): min=32768, max=282624, per=2.13%, avg=86609.10, stdev=79991.79, samples=10 00:25:37.156 iops : min= 32, max= 276, avg=84.50, stdev=78.10, samples=10 00:25:37.156 lat (msec) : 100=0.36%, 250=1.64%, 500=17.45%, 750=5.64%, 1000=6.73% 00:25:37.156 lat (msec) : 2000=45.09%, >=2000=23.09% 00:25:37.156 cpu : usr=0.05%, sys=1.12%, ctx=1528, majf=0, minf=32769 00:25:37.156 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.5% 00:25:37.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.156 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.156 issued rwts: total=550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.156 job5: (groupid=0, jobs=1): err= 0: pid=387764: Sun Nov 17 04:42:13 2024 00:25:37.156 read: IOPS=174, BW=175MiB/s (183MB/s)(1767MiB/10109msec) 00:25:37.156 slat (usec): min=58, max=1485.5k, avg=5668.93, stdev=37793.20 00:25:37.156 clat (msec): min=83, max=2911, avg=701.65, stdev=463.52 00:25:37.156 lat (msec): min=156, max=2916, avg=707.32, stdev=465.60 00:25:37.156 clat percentiles (msec): 00:25:37.156 | 1.00th=[ 222], 5.00th=[ 259], 10.00th=[ 342], 20.00th=[ 493], 00:25:37.156 | 30.00th=[ 535], 40.00th=[ 558], 50.00th=[ 600], 60.00th=[ 625], 00:25:37.156 | 70.00th=[ 701], 80.00th=[ 776], 90.00th=[ 844], 95.00th=[ 2165], 00:25:37.156 | 99.00th=[ 2232], 99.50th=[ 2903], 99.90th=[ 2903], 99.95th=[ 2903], 00:25:37.156 | 99.99th=[ 2903] 00:25:37.156 bw ( KiB/s): min=88064, 
max=450560, per=4.85%, avg=197451.29, stdev=80867.02, samples=17 00:25:37.156 iops : min= 86, max= 440, avg=192.82, stdev=78.97, samples=17 00:25:37.156 lat (msec) : 100=0.06%, 250=4.30%, 500=19.75%, 750=51.90%, 1000=16.81% 00:25:37.156 lat (msec) : >=2000=7.19% 00:25:37.156 cpu : usr=0.09%, sys=2.76%, ctx=1375, majf=0, minf=32769 00:25:37.156 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:25:37.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.156 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.156 issued rwts: total=1767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.156 job5: (groupid=0, jobs=1): err= 0: pid=387765: Sun Nov 17 04:42:13 2024 00:25:37.156 read: IOPS=70, BW=70.3MiB/s (73.7MB/s)(707MiB/10062msec) 00:25:37.156 slat (usec): min=32, max=2014.3k, avg=14140.33, stdev=102545.91 00:25:37.156 clat (msec): min=60, max=5371, avg=988.25, stdev=651.15 00:25:37.156 lat (msec): min=62, max=5372, avg=1002.39, stdev=672.02 00:25:37.156 clat percentiles (msec): 00:25:37.156 | 1.00th=[ 129], 5.00th=[ 296], 10.00th=[ 531], 20.00th=[ 617], 00:25:37.156 | 30.00th=[ 651], 40.00th=[ 776], 50.00th=[ 911], 60.00th=[ 961], 00:25:37.156 | 70.00th=[ 1028], 80.00th=[ 1250], 90.00th=[ 1653], 95.00th=[ 1687], 00:25:37.156 | 99.00th=[ 5269], 99.50th=[ 5336], 99.90th=[ 5403], 99.95th=[ 5403], 00:25:37.156 | 99.99th=[ 5403] 00:25:37.156 bw ( KiB/s): min=49152, max=210944, per=2.92%, avg=118772.90, stdev=53376.34, samples=10 00:25:37.156 iops : min= 48, max= 206, avg=115.90, stdev=52.24, samples=10 00:25:37.156 lat (msec) : 100=0.85%, 250=3.11%, 500=4.81%, 750=28.71%, 1000=30.41% 00:25:37.156 lat (msec) : 2000=30.13%, >=2000=1.98% 00:25:37.157 cpu : usr=0.07%, sys=1.40%, ctx=981, majf=0, minf=32769 00:25:37.157 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:25:37.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.157 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.157 issued rwts: total=707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.157 job5: (groupid=0, jobs=1): err= 0: pid=387766: Sun Nov 17 04:42:13 2024 00:25:37.157 read: IOPS=37, BW=37.5MiB/s (39.3MB/s)(376MiB/10037msec) 00:25:37.157 slat (usec): min=408, max=2012.7k, avg=26615.60, stdev=136328.62 00:25:37.157 clat (msec): min=27, max=4900, avg=2072.63, stdev=1017.78 00:25:37.157 lat (msec): min=40, max=4907, avg=2099.25, stdev=1026.43 00:25:37.157 clat percentiles (msec): 00:25:37.157 | 1.00th=[ 69], 5.00th=[ 493], 10.00th=[ 969], 20.00th=[ 1099], 00:25:37.157 | 30.00th=[ 1200], 40.00th=[ 1670], 50.00th=[ 2106], 60.00th=[ 2400], 00:25:37.157 | 70.00th=[ 2769], 80.00th=[ 3104], 90.00th=[ 3406], 95.00th=[ 3540], 00:25:37.157 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:25:37.157 | 99.99th=[ 4933] 00:25:37.157 bw ( KiB/s): min=28672, max=94208, per=1.15%, avg=46891.00, stdev=19249.91, samples=10 00:25:37.157 iops : min= 28, max= 92, avg=45.70, stdev=18.83, samples=10 00:25:37.157 lat (msec) : 50=0.80%, 100=0.53%, 250=1.33%, 500=2.39%, 750=2.66% 00:25:37.157 lat (msec) : 1000=4.26%, 2000=35.37%, >=2000=52.66% 00:25:37.157 cpu : usr=0.05%, sys=0.88%, ctx=1366, majf=0, minf=32769 00:25:37.157 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2% 00:25:37.157 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.157 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:37.157 issued rwts: total=376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.157 job5: (groupid=0, jobs=1): err= 0: pid=387767: Sun Nov 17 04:42:13 2024 00:25:37.157 read: IOPS=70, BW=70.7MiB/s (74.1MB/s)(713MiB/10086msec) 00:25:37.157 slat (usec): min=51, max=2000.8k, avg=14045.48, stdev=99820.88 00:25:37.157 clat (msec): min=67, max=4690, avg=1091.77, stdev=740.82 00:25:37.157 lat (msec): min=96, max=4692, avg=1105.82, stdev=751.89 00:25:37.157 clat percentiles (msec): 00:25:37.157 | 1.00th=[ 174], 5.00th=[ 313], 10.00th=[ 372], 20.00th=[ 401], 00:25:37.157 | 30.00th=[ 575], 40.00th=[ 793], 50.00th=[ 844], 60.00th=[ 1011], 00:25:37.157 | 70.00th=[ 1519], 80.00th=[ 1888], 90.00th=[ 1972], 95.00th=[ 2005], 00:25:37.157 | 99.00th=[ 4597], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:25:37.157 | 99.99th=[ 4665] 00:25:37.157 bw ( KiB/s): min=28672, max=339968, per=2.68%, avg=108916.36, stdev=98771.45, samples=11 00:25:37.157 iops : min= 28, max= 332, avg=106.36, stdev=96.46, samples=11 00:25:37.157 lat (msec) : 100=0.28%, 250=0.98%, 500=23.28%, 750=14.17%, 1000=20.76% 00:25:37.157 lat (msec) : 2000=36.47%, >=2000=4.07% 00:25:37.157 cpu : usr=0.03%, sys=1.14%, ctx=2258, majf=0, minf=32769 00:25:37.157 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:25:37.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.157 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.157 issued rwts: total=713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.157 job5: (groupid=0, jobs=1): err= 0: pid=387768: Sun Nov 17 04:42:13 2024 00:25:37.157 read: IOPS=123, BW=123MiB/s (129MB/s)(1237MiB/10044msec) 00:25:37.157 slat (usec): min=41, max=1697.2k, avg=8081.32, stdev=50128.87 00:25:37.157 clat (msec): min=40, max=2875, avg=995.61, stdev=707.09 00:25:37.157 lat (msec): min=46, max=2942, avg=1003.69, stdev=709.27 00:25:37.157 clat percentiles (msec): 00:25:37.157 | 1.00th=[ 279], 5.00th=[ 414], 10.00th=[ 518], 20.00th=[ 527], 00:25:37.157 | 30.00th=[ 535], 40.00th=[ 567], 50.00th=[ 584], 60.00th=[ 634], 00:25:37.157 | 70.00th=[ 1183], 80.00th=[ 1703], 90.00th=[ 2232], 95.00th=[ 2400], 00:25:37.157 | 99.00th=[ 2802], 99.50th=[ 2836], 99.90th=[ 2869], 99.95th=[ 2869], 00:25:37.157 | 99.99th=[ 2869] 00:25:37.157 bw ( KiB/s): min=10219, max=256000, per=3.28%, avg=133720.82, stdev=90270.28, samples=17 00:25:37.157 iops : min= 9, max= 250, avg=130.47, stdev=88.30, samples=17 00:25:37.157 lat (msec) : 50=0.16%, 100=0.24%, 250=0.49%, 500=6.71%, 750=57.07% 00:25:37.157 lat (msec) : 1000=1.54%, 2000=16.65%, >=2000=17.14% 00:25:37.157 cpu : usr=0.07%, sys=2.07%, ctx=1638, majf=0, minf=32769 00:25:37.157 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:25:37.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.157 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.157 issued rwts: total=1237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.157 job5: (groupid=0, jobs=1): err= 0: pid=387769: Sun Nov 17 04:42:13 2024 00:25:37.157 read: IOPS=64, BW=64.9MiB/s (68.1MB/s)(653MiB/10057msec) 00:25:37.157 
slat (usec): min=40, max=1753.3k, avg=15329.17, stdev=71835.07 00:25:37.157 clat (msec): min=43, max=3957, avg=1819.78, stdev=987.53 00:25:37.157 lat (msec): min=57, max=3957, avg=1835.11, stdev=988.29 00:25:37.157 clat percentiles (msec): 00:25:37.157 | 1.00th=[ 203], 5.00th=[ 877], 10.00th=[ 877], 20.00th=[ 885], 00:25:37.157 | 30.00th=[ 969], 40.00th=[ 1116], 50.00th=[ 1318], 60.00th=[ 2106], 00:25:37.157 | 70.00th=[ 2802], 80.00th=[ 3004], 90.00th=[ 3071], 95.00th=[ 3239], 00:25:37.157 | 99.00th=[ 3842], 99.50th=[ 3943], 99.90th=[ 3943], 99.95th=[ 3943], 00:25:37.157 | 99.99th=[ 3943] 00:25:37.157 bw ( KiB/s): min=14336, max=155648, per=1.65%, avg=67167.69, stdev=48502.35, samples=16 00:25:37.157 iops : min= 14, max= 152, avg=65.44, stdev=47.32, samples=16 00:25:37.157 lat (msec) : 50=0.15%, 100=0.31%, 250=1.07%, 500=0.92%, 750=1.99% 00:25:37.157 lat (msec) : 1000=27.41%, 2000=25.27%, >=2000=42.88% 00:25:37.157 cpu : usr=0.09%, sys=1.30%, ctx=1352, majf=0, minf=32769 00:25:37.157 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.4% 00:25:37.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.157 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.157 issued rwts: total=653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.157 job5: (groupid=0, jobs=1): err= 0: pid=387770: Sun Nov 17 04:42:13 2024 00:25:37.157 read: IOPS=89, BW=89.9MiB/s (94.2MB/s)(905MiB/10071msec) 00:25:37.157 slat (usec): min=37, max=2004.7k, avg=11066.77, stdev=90036.50 00:25:37.157 clat (msec): min=51, max=4950, avg=792.08, stdev=554.50 00:25:37.157 lat (msec): min=88, max=5019, avg=803.15, stdev=572.43 00:25:37.157 clat percentiles (msec): 00:25:37.157 | 1.00th=[ 239], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 351], 00:25:37.157 | 30.00th=[ 376], 40.00th=[ 493], 50.00th=[ 542], 60.00th=[ 709], 00:25:37.157 | 70.00th=[ 911], 80.00th=[ 1368], 90.00th=[ 1586], 95.00th=[ 1955], 00:25:37.157 | 99.00th=[ 2022], 99.50th=[ 3037], 99.90th=[ 4933], 99.95th=[ 4933], 00:25:37.157 | 99.99th=[ 4933] 00:25:37.157 bw ( KiB/s): min=32768, max=389120, per=4.23%, avg=172222.67, stdev=137521.18, samples=9 00:25:37.157 iops : min= 32, max= 380, avg=168.11, stdev=134.30, samples=9 00:25:37.157 lat (msec) : 100=0.22%, 250=1.10%, 500=39.12%, 750=22.76%, 1000=8.95% 00:25:37.157 lat (msec) : 2000=25.19%, >=2000=2.65% 00:25:37.157 cpu : usr=0.01%, sys=1.23%, ctx=2740, majf=0, minf=32769 00:25:37.157 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:25:37.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.157 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.157 issued rwts: total=905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.157 job5: (groupid=0, jobs=1): err= 0: pid=387771: Sun Nov 17 04:42:13 2024 00:25:37.157 read: IOPS=59, BW=59.4MiB/s (62.3MB/s)(601MiB/10122msec) 00:25:37.157 slat (usec): min=48, max=2080.5k, avg=16643.94, stdev=111177.23 00:25:37.157 clat (msec): min=116, max=4683, avg=1368.71, stdev=983.45 00:25:37.157 lat (msec): min=138, max=4686, avg=1385.35, stdev=992.78 00:25:37.157 clat percentiles (msec): 00:25:37.157 | 1.00th=[ 284], 5.00th=[ 409], 10.00th=[ 418], 20.00th=[ 464], 00:25:37.157 | 30.00th=[ 617], 40.00th=[ 768], 50.00th=[ 1167], 60.00th=[ 1519], 00:25:37.157 | 70.00th=[ 1670], 80.00th=[ 2232], 
90.00th=[ 2802], 95.00th=[ 2869], 00:25:37.157 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:25:37.157 | 99.99th=[ 4665] 00:25:37.157 bw ( KiB/s): min=12288, max=272384, per=2.17%, avg=88250.18, stdev=84279.39, samples=11 00:25:37.157 iops : min= 12, max= 266, avg=86.18, stdev=82.30, samples=11 00:25:37.157 lat (msec) : 250=0.67%, 500=21.96%, 750=16.97%, 1000=6.82%, 2000=31.45% 00:25:37.157 lat (msec) : >=2000=22.13% 00:25:37.157 cpu : usr=0.04%, sys=1.06%, ctx=1980, majf=0, minf=32769 00:25:37.157 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.5% 00:25:37.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.157 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.157 issued rwts: total=601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.157 job5: (groupid=0, jobs=1): err= 0: pid=387772: Sun Nov 17 04:42:13 2024 00:25:37.157 read: IOPS=116, BW=117MiB/s (122MB/s)(1176MiB/10069msec) 00:25:37.157 slat (usec): min=32, max=1998.9k, avg=8506.86, stdev=59351.92 00:25:37.157 clat (msec): min=59, max=3658, avg=1017.62, stdev=978.84 00:25:37.157 lat (msec): min=70, max=3666, avg=1026.13, stdev=983.07 00:25:37.157 clat percentiles (msec): 00:25:37.157 | 1.00th=[ 128], 5.00th=[ 130], 10.00th=[ 218], 20.00th=[ 313], 00:25:37.157 | 30.00th=[ 359], 40.00th=[ 567], 50.00th=[ 667], 60.00th=[ 810], 00:25:37.157 | 70.00th=[ 1028], 80.00th=[ 1418], 90.00th=[ 3239], 95.00th=[ 3473], 00:25:37.157 | 99.00th=[ 3641], 99.50th=[ 3641], 99.90th=[ 3641], 99.95th=[ 3675], 00:25:37.157 | 99.99th=[ 3675] 00:25:37.157 bw ( KiB/s): min= 4096, max=372736, per=3.52%, avg=143211.47, stdev=121572.93, samples=15 00:25:37.157 iops : min= 4, max= 364, avg=139.80, stdev=118.75, samples=15 00:25:37.157 lat (msec) : 100=0.26%, 250=13.18%, 500=24.49%, 750=19.05%, 1000=12.84% 00:25:37.157 lat (msec) : 2000=16.24%, >=2000=13.95% 00:25:37.157 cpu : usr=0.04%, sys=1.75%, ctx=2803, majf=0, minf=32769 00:25:37.157 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:25:37.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.157 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.157 issued rwts: total=1176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.157 job5: (groupid=0, jobs=1): err= 0: pid=387773: Sun Nov 17 04:42:13 2024 00:25:37.157 read: IOPS=62, BW=62.9MiB/s (66.0MB/s)(637MiB/10121msec) 00:25:37.157 slat (usec): min=408, max=1750.0k, avg=15711.77, stdev=78140.87 00:25:37.157 clat (msec): min=109, max=4459, avg=1643.36, stdev=946.66 00:25:37.157 lat (msec): min=146, max=4517, avg=1659.07, stdev=951.92 00:25:37.157 clat percentiles (msec): 00:25:37.157 | 1.00th=[ 313], 5.00th=[ 401], 10.00th=[ 426], 20.00th=[ 701], 00:25:37.157 | 30.00th=[ 1062], 40.00th=[ 1469], 50.00th=[ 1737], 60.00th=[ 1821], 00:25:37.157 | 70.00th=[ 2022], 80.00th=[ 2299], 90.00th=[ 2567], 95.00th=[ 4111], 00:25:37.157 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4463], 99.95th=[ 4463], 00:25:37.157 | 99.99th=[ 4463] 00:25:37.157 bw ( KiB/s): min= 8208, max=313971, per=1.97%, avg=80288.69, stdev=81448.40, samples=13 00:25:37.158 iops : min= 8, max= 306, avg=78.31, stdev=79.38, samples=13 00:25:37.158 lat (msec) : 250=0.63%, 500=13.66%, 750=9.42%, 1000=4.40%, 2000=41.29% 00:25:37.158 lat (msec) : >=2000=30.61% 00:25:37.158 cpu : 
usr=0.01%, sys=1.39%, ctx=2202, majf=0, minf=32769 00:25:37.158 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:25:37.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.158 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:37.158 issued rwts: total=637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.158 job5: (groupid=0, jobs=1): err= 0: pid=387774: Sun Nov 17 04:42:13 2024 00:25:37.158 read: IOPS=132, BW=132MiB/s (138MB/s)(1334MiB/10101msec) 00:25:37.158 slat (usec): min=36, max=1721.9k, avg=7527.22, stdev=48492.39 00:25:37.158 clat (msec): min=54, max=2875, avg=746.93, stdev=463.91 00:25:37.158 lat (msec): min=104, max=2877, avg=754.45, stdev=468.99 00:25:37.158 clat percentiles (msec): 00:25:37.158 | 1.00th=[ 236], 5.00th=[ 264], 10.00th=[ 268], 20.00th=[ 347], 00:25:37.158 | 30.00th=[ 414], 40.00th=[ 460], 50.00th=[ 642], 60.00th=[ 810], 00:25:37.158 | 70.00th=[ 953], 80.00th=[ 1070], 90.00th=[ 1301], 95.00th=[ 1737], 00:25:37.158 | 99.00th=[ 1989], 99.50th=[ 2836], 99.90th=[ 2869], 99.95th=[ 2869], 00:25:37.158 | 99.99th=[ 2869] 00:25:37.158 bw ( KiB/s): min=51200, max=477184, per=4.04%, avg=164659.20, stdev=131160.66, samples=15 00:25:37.158 iops : min= 50, max= 466, avg=160.80, stdev=128.09, samples=15 00:25:37.158 lat (msec) : 100=0.07%, 250=1.20%, 500=40.93%, 750=14.39%, 1000=18.37% 00:25:37.158 lat (msec) : 2000=24.21%, >=2000=0.82% 00:25:37.158 cpu : usr=0.05%, sys=1.57%, ctx=2796, majf=0, minf=32769 00:25:37.158 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:25:37.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.158 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.158 issued rwts: total=1334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.158 00:25:37.158 Run status group 0 (all jobs): 00:25:37.158 READ: bw=3975MiB/s (4168MB/s), 7691KiB/s-200MiB/s (7875kB/s-210MB/s), io=39.5GiB (42.4GB), run=10031-10162msec 00:25:37.158 00:25:37.158 Disk stats (read/write): 00:25:37.158 nvme0n1: ios=47679/0, merge=0/0, ticks=6064349/0, in_queue=6064349, util=97.99% 00:25:37.158 nvme1n1: ios=17795/0, merge=0/0, ticks=4621154/0, in_queue=4621154, util=98.30% 00:25:37.158 nvme2n1: ios=66282/0, merge=0/0, ticks=7517696/0, in_queue=7517696, util=98.57% 00:25:37.158 nvme3n1: ios=47926/0, merge=0/0, ticks=6225112/0, in_queue=6225112, util=98.63% 00:25:37.158 nvme4n1: ios=47913/0, merge=0/0, ticks=6555055/0, in_queue=6555055, util=98.97% 00:25:37.158 nvme5n1: ios=89444/0, merge=0/0, ticks=6701541/0, in_queue=6701541, util=98.60% 00:25:37.158 04:42:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:25:37.158 04:42:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:25:37.158 04:42:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:37.158 04:42:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:25:37.158 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:25:37.158 04:42:15 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:37.158 04:42:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:37.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:37.726 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:25:37.726 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:37.726 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:37.726 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:25:37.726 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:25:37.726 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:37.985 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:25:37.985 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.985 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.985 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.985 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.985 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:37.985 04:42:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:38.923 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 
00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:38.923 04:42:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:39.860 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:39.860 04:42:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:40.797 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect 
SPDK00000000000004 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:40.797 04:42:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:41.734 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 
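Note on the teardown pattern traced above: each of the six namespaces is detached with `nvme disconnect -n nqn.2016-06.io.spdk:cnodeN`, the script then waits until no block device still reports the matching serial, and only after that deletes the subsystem over RPC. A minimal reconstruction of the wait step, assuming a bounded retry loop (the 15-attempt limit and 1 s sleep are guesses; the lsblk/grep probes are taken from the trace):

```bash
# Poll until no block device reports the given NVMe serial (e.g.
# SPDK00000000000001). Both probe forms below appear in the trace;
# the retry bound and sleep interval are assumptions.
waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        if (( ++i > 15 )); then
            echo "serial $serial still visible after $i checks" >&2
            return 1
        fi
        sleep 1
    done
    return 0
}
```

Deleting the subsystem before this wait returns could race with the initiator still holding the namespace open, which is why the trace runs the probes before every `nvmf_delete_subsystem`.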
00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.734 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:41.993 rmmod nvme_rdma 00:25:41.993 rmmod nvme_fabrics 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 386232 ']' 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 386232 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 386232 ']' 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 386232 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386232 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386232' 00:25:41.993 killing process with pid 386232 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 386232 00:25:41.993 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 386232 00:25:42.255 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:42.255 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:42.255 00:25:42.255 real 0m32.233s 00:25:42.255 user 1m48.359s 00:25:42.255 sys 0m17.631s 00:25:42.255 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.255 04:42:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:42.255 ************************************ 00:25:42.255 END TEST nvmf_srq_overwhelm 00:25:42.255 ************************************ 00:25:42.255 04:42:21 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:42.255 04:42:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:42.255 04:42:21 
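The cleanup traced here first retries `modprobe -v -r nvme-rdma` / `nvme-fabrics` under `set +e` (the rmmod lines show both modules unloading), then kills target process 386232. The PID-check sequence is fully visible in the trace; a compact reconstruction (signal escalation, if any, is not shown and is omitted):

```bash
# Kill the nvmf target by PID, as the trace does: confirm the process
# exists, refuse to signal a bare "sudo" wrapper, then SIGTERM and reap.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1      # still running?
    if [ "$(uname)" = Linux ]; then
        local comm
        comm=$(ps --no-headers -o comm= "$pid") # "reactor_0" in this run
        [ "$comm" = sudo ] && return 1          # never kill the sudo parent
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                 # reap; works for child PIDs
}
```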
nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.255 04:42:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:42.255 ************************************ 00:25:42.255 START TEST nvmf_shutdown 00:25:42.255 ************************************ 00:25:42.255 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:42.516 * Looking for test storage... 00:25:42.516 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:42.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.516 --rc genhtml_branch_coverage=1 00:25:42.516 --rc genhtml_function_coverage=1 00:25:42.516 --rc genhtml_legend=1 00:25:42.516 --rc geninfo_all_blocks=1 00:25:42.516 --rc geninfo_unexecuted_blocks=1 00:25:42.516 00:25:42.516 ' 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:42.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.516 --rc genhtml_branch_coverage=1 00:25:42.516 --rc genhtml_function_coverage=1 00:25:42.516 --rc genhtml_legend=1 00:25:42.516 --rc geninfo_all_blocks=1 00:25:42.516 --rc geninfo_unexecuted_blocks=1 00:25:42.516 00:25:42.516 ' 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:42.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.516 --rc genhtml_branch_coverage=1 00:25:42.516 --rc genhtml_function_coverage=1 00:25:42.516 --rc genhtml_legend=1 00:25:42.516 --rc geninfo_all_blocks=1 00:25:42.516 --rc geninfo_unexecuted_blocks=1 00:25:42.516 00:25:42.516 ' 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:42.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.516 --rc genhtml_branch_coverage=1 00:25:42.516 --rc genhtml_function_coverage=1 00:25:42.516 --rc genhtml_legend=1 00:25:42.516 --rc geninfo_all_blocks=1 00:25:42.516 --rc geninfo_unexecuted_blocks=1 00:25:42.516 00:25:42.516 ' 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
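The scripts/common.sh trace above is a component-wise version comparison: `lt 1.15 2` splits both versions on `.`, `-` or `:`, validates each field as a decimal, and compares position by position to decide which lcov coverage flags to export. A sketch of the same logic (padding missing components with 0 is an assumption):

```bash
# Component-wise version compare, mirroring the cmp_versions trace:
# "lt 1.15 2" splits into (1 15) and (2), then compares field by field.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:            # same separators the trace uses
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields default to 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}
```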
uname -s 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.516 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.517 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:42.517 04:42:21 
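One genuine script error is recorded in this span: common.sh line 33 evaluates `'[' '' -eq 1 ']'` while assembling NVMF_APP, and test(1) reports "integer expression expected" because the tested variable is empty. The run continues only because the failed test simply takes the false branch. A defensive rewrite that keeps the numeric test well-formed (the flag name below is a placeholder, not necessarily the variable common.sh reads):

```bash
# The trace shows: '[' '' -eq 1 ']'  ->  "[: : integer expression expected".
# Defaulting the (possibly unset) flag to 0 avoids the malformed test.
# SOME_HUGE_FLAG is a hypothetical name standing in for the real variable.
if [ "${SOME_HUGE_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--no-huge)   # hypothetical consumer of the flag
fi
```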
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.517 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:42.776 ************************************ 00:25:42.776 START TEST nvmf_shutdown_tc1 00:25:42.776 ************************************ 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.776 04:42:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.903 04:42:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
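The @320-@344 block builds lookup tables of NIC PCI device IDs: e810/x722 for Intel (0x8086) and the mlx array for Mellanox (0x15b3) ConnectX parts. This run matches 0x1015, which is in the mlx table. The tables can be cross-checked against the host, for example (output format varies by distro; the device-family mapping is from the PCI ID database, not from this log):

```bash
# List Mellanox (vendor 0x15b3) devices with numeric IDs; this host's two
# ports report device 0x1015 (ConnectX-4 Lx family per the PCI ID database).
lspci -d 15b3: -nn
```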
pci_devs=("${mlx[@]}") 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:50.903 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:50.903 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.0: mlx_0_0' 00:25:50.903 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:50.903 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:50.903 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
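Both "Found net devices under ..." messages come from the same sysfs glob: every netdev registered for a PCI function appears as an entry under that function's net/ directory. A standalone version of the mapping step (BDFs taken from this run; they will differ on other hosts):

```bash
# Map each Mellanox PCI function to its Linux netdev name via sysfs,
# exactly as the ${pci_net_devs[@]##*/} expansion in the trace does.
for pci in 0000:d9:00.0 0000:d9:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
```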
00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:50.904 04:42:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:50.904 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:50.904 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:50.904 altname enp217s0f0np0 00:25:50.904 altname ens818f0np0 00:25:50.904 inet 192.168.100.8/24 scope global mlx_0_0 00:25:50.904 valid_lft forever preferred_lft forever 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:50.904 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:50.904 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:50.904 altname enp217s0f1np1 00:25:50.904 altname ens818f1np1 00:25:50.904 inet 192.168.100.9/24 scope global mlx_0_1 00:25:50.904 valid_lft forever preferred_lft forever 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:50.904 
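The `ip addr show` dumps above confirm both ports carry their test addresses (192.168.100.8/24 on mlx_0_0, 192.168.100.9/24 on mlx_0_1). The get_ip_address helper traced around them reduces to a single pipeline:

```bash
# get_ip_address as traced: -o prints one line per address, field 4 is
# "addr/prefixlen", cut keeps only the address part.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9
```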
04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:50.904 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:50.905 192.168.100.9' 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:50.905 192.168.100.9' 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:25:50.905 04:42:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:50.905 192.168.100.9' 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=394230 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 394230 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 394230 ']' 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.905 [2024-11-17 04:42:28.669079] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
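RDMA_IP_LIST is a two-line string, and the trace selects the target addresses purely with head/tail:

```bash
# Split the newline-separated IP list into first and second target address,
# exactly as the traced head/tail pipelines do.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
```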
00:25:50.905 [2024-11-17 04:42:28.669131] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.905 [2024-11-17 04:42:28.759154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.905 [2024-11-17 04:42:28.781941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.905 [2024-11-17 04:42:28.781973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.905 [2024-11-17 04:42:28.781982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.905 [2024-11-17 04:42:28.781991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.905 [2024-11-17 04:42:28.781998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.905 [2024-11-17 04:42:28.783640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.905 [2024-11-17 04:42:28.783748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.905 [2024-11-17 04:42:28.783855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.905 [2024-11-17 04:42:28.783856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.905 04:42:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.905 [2024-11-17 04:42:28.950774] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10fcf50/0x1101400) succeed. 00:25:50.905 [2024-11-17 04:42:28.959927] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10fe590/0x1142aa0) succeed. 
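After waitforlisten confirms PID 394230 is serving RPCs on /var/tmp/spdk.sock, the transport is created with a single RPC; the two create_ib_device notices show it binding both mlx5 ports. Setting the rpc_cmd wrapper aside, the equivalent direct invocation would be (rpc.py path per this workspace; rpc_cmd may multiplex over a persistent session instead of a one-shot call):

```bash
# One-shot equivalent of the traced rpc_cmd: create the RDMA transport with
# 1024 shared receive buffers and an 8 KiB IO unit size.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
```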
00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.905 Malloc1 00:25:50.905 [2024-11-17 04:42:29.198821] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:50.905 Malloc2 00:25:50.905 Malloc3 00:25:50.905 Malloc4 00:25:50.905 Malloc5 00:25:50.905 Malloc6 00:25:50.905 Malloc7 00:25:50.905 Malloc8 00:25:50.905 Malloc9 00:25:50.905 Malloc10 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.905 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=394535 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 394535 /var/tmp/bdevperf.sock 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 394535 ']' 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
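The @28-@29 loop in the previous span writes one heredoc per subsystem into rpcs.txt, and the `rpc_cmd` here replays the whole file in one RPC session, which is why ten Malloc bdevs and the 4420 listener appear at once. The per-subsystem RPC lines themselves are not echoed into the log; a plausible reconstruction, using the Malloc names and the 64 MiB / 512 B geometry set at shutdown.sh@12-13, would be:

```bash
# Hypothetical contents of the rpcs.txt batch; only the bdev names, the
# 64/512 geometry and the rdma/4420 listener are confirmed by the log,
# the exact RPC lines are an assumption.
rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
rm -rf "$rpcs"
for i in {1..10}; do
cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
rpc_cmd < "$rpcs"   # replay the batch against the running target
```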
00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.906 { 00:25:50.906 "params": { 00:25:50.906 "name": "Nvme$subsystem", 00:25:50.906 "trtype": "$TEST_TRANSPORT", 00:25:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.906 "adrfam": "ipv4", 00:25:50.906 "trsvcid": "$NVMF_PORT", 00:25:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.906 "hdgst": ${hdgst:-false}, 00:25:50.906 "ddgst": ${ddgst:-false} 00:25:50.906 }, 00:25:50.906 "method": "bdev_nvme_attach_controller" 00:25:50.906 } 00:25:50.906 EOF 00:25:50.906 )") 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.906 { 00:25:50.906 "params": { 00:25:50.906 "name": "Nvme$subsystem", 00:25:50.906 "trtype": "$TEST_TRANSPORT", 00:25:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.906 "adrfam": "ipv4", 00:25:50.906 "trsvcid": "$NVMF_PORT", 00:25:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.906 "hdgst": ${hdgst:-false}, 00:25:50.906 "ddgst": ${ddgst:-false} 00:25:50.906 }, 00:25:50.906 "method": "bdev_nvme_attach_controller" 00:25:50.906 } 00:25:50.906 EOF 00:25:50.906 )") 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.906 { 00:25:50.906 "params": { 00:25:50.906 "name": "Nvme$subsystem", 00:25:50.906 "trtype": "$TEST_TRANSPORT", 00:25:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.906 "adrfam": "ipv4", 00:25:50.906 "trsvcid": "$NVMF_PORT", 00:25:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.906 "hdgst": ${hdgst:-false}, 00:25:50.906 "ddgst": ${ddgst:-false} 00:25:50.906 }, 00:25:50.906 "method": "bdev_nvme_attach_controller" 00:25:50.906 } 00:25:50.906 EOF 00:25:50.906 )") 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.906 { 00:25:50.906 "params": { 00:25:50.906 "name": "Nvme$subsystem", 00:25:50.906 "trtype": "$TEST_TRANSPORT", 00:25:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.906 "adrfam": "ipv4", 00:25:50.906 "trsvcid": "$NVMF_PORT", 00:25:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.906 "hdgst": ${hdgst:-false}, 00:25:50.906 "ddgst": ${ddgst:-false} 00:25:50.906 }, 00:25:50.906 "method": "bdev_nvme_attach_controller" 00:25:50.906 } 00:25:50.906 EOF 00:25:50.906 )") 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.906 { 00:25:50.906 "params": { 00:25:50.906 "name": "Nvme$subsystem", 00:25:50.906 "trtype": "$TEST_TRANSPORT", 00:25:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.906 "adrfam": "ipv4", 00:25:50.906 "trsvcid": "$NVMF_PORT", 00:25:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.906 "hdgst": ${hdgst:-false}, 00:25:50.906 "ddgst": ${ddgst:-false} 00:25:50.906 }, 00:25:50.906 "method": "bdev_nvme_attach_controller" 00:25:50.906 } 00:25:50.906 EOF 00:25:50.906 )") 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.906 { 00:25:50.906 "params": { 00:25:50.906 "name": "Nvme$subsystem", 00:25:50.906 "trtype": "$TEST_TRANSPORT", 00:25:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.906 "adrfam": "ipv4", 00:25:50.906 "trsvcid": "$NVMF_PORT", 00:25:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.906 "hdgst": ${hdgst:-false}, 00:25:50.906 "ddgst": ${ddgst:-false} 00:25:50.906 }, 00:25:50.906 "method": "bdev_nvme_attach_controller" 00:25:50.906 } 00:25:50.906 EOF 00:25:50.906 )") 00:25:50.906 [2024-11-17 04:42:29.696710] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:25:50.906 [2024-11-17 04:42:29.696762] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.906 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.906 { 00:25:50.906 "params": { 00:25:50.906 "name": "Nvme$subsystem", 00:25:50.906 "trtype": "$TEST_TRANSPORT", 00:25:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.906 "adrfam": "ipv4", 00:25:50.906 "trsvcid": "$NVMF_PORT", 00:25:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.906 "hdgst": ${hdgst:-false}, 00:25:50.906 "ddgst": ${ddgst:-false} 00:25:50.906 }, 00:25:50.906 "method": "bdev_nvme_attach_controller" 00:25:50.906 } 00:25:50.906 EOF 00:25:50.906 )") 00:25:50.907 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:50.907 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.907 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.907 { 00:25:50.907 "params": { 00:25:50.907 "name": "Nvme$subsystem", 00:25:50.907 "trtype": "$TEST_TRANSPORT", 00:25:50.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.907 "adrfam": "ipv4", 00:25:50.907 "trsvcid": "$NVMF_PORT", 00:25:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.907 "hdgst": ${hdgst:-false}, 00:25:50.907 "ddgst": ${ddgst:-false} 00:25:50.907 }, 00:25:50.907 "method": "bdev_nvme_attach_controller" 00:25:50.907 } 00:25:50.907 EOF 00:25:50.907 )") 00:25:50.907 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:50.907 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.907 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.907 { 00:25:50.907 "params": { 00:25:50.907 "name": "Nvme$subsystem", 00:25:50.907 "trtype": "$TEST_TRANSPORT", 00:25:50.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.907 "adrfam": "ipv4", 00:25:50.907 "trsvcid": "$NVMF_PORT", 00:25:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.907 "hdgst": ${hdgst:-false}, 00:25:50.907 "ddgst": ${ddgst:-false} 00:25:50.907 }, 00:25:50.907 "method": "bdev_nvme_attach_controller" 00:25:50.907 } 00:25:50.907 EOF 00:25:50.907 )") 00:25:50.907 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:50.907 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.907 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.907 { 00:25:50.907 "params": { 00:25:50.907 "name": "Nvme$subsystem", 
00:25:50.907 "trtype": "$TEST_TRANSPORT", 00:25:50.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.907 "adrfam": "ipv4", 00:25:50.907 "trsvcid": "$NVMF_PORT", 00:25:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.907 "hdgst": ${hdgst:-false}, 00:25:50.907 "ddgst": ${ddgst:-false} 00:25:50.907 }, 00:25:50.907 "method": "bdev_nvme_attach_controller" 00:25:50.907 } 00:25:50.907 EOF 00:25:50.907 )") 00:25:51.167 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:51.167 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:25:51.167 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:51.167 04:42:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme1", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": "bdev_nvme_attach_controller" 00:25:51.167 },{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme2", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": "bdev_nvme_attach_controller" 00:25:51.167 },{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme3", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": "bdev_nvme_attach_controller" 00:25:51.167 },{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme4", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": "bdev_nvme_attach_controller" 00:25:51.167 },{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme5", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": "bdev_nvme_attach_controller" 00:25:51.167 },{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme6", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": 
"bdev_nvme_attach_controller" 00:25:51.167 },{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme7", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": "bdev_nvme_attach_controller" 00:25:51.167 },{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme8", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": "bdev_nvme_attach_controller" 00:25:51.167 },{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme9", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": "bdev_nvme_attach_controller" 00:25:51.167 },{ 00:25:51.167 "params": { 00:25:51.167 "name": "Nvme10", 00:25:51.167 "trtype": "rdma", 00:25:51.167 "traddr": "192.168.100.8", 00:25:51.167 "adrfam": "ipv4", 00:25:51.167 "trsvcid": "4420", 00:25:51.167 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:51.167 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:51.167 "hdgst": false, 00:25:51.167 "ddgst": false 00:25:51.167 }, 00:25:51.167 "method": "bdev_nvme_attach_controller" 00:25:51.167 }' 00:25:51.167 [2024-11-17 04:42:29.789871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.167 [2024-11-17 04:42:29.812208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.104 04:42:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.104 04:42:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:52.104 04:42:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:52.104 04:42:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.104 04:42:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:52.104 04:42:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.104 04:42:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 394535 00:25:52.104 04:42:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:52.104 04:42:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:53.044 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 394535 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@89 -- # kill -0 394230 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.044 { 00:25:53.044 "params": { 00:25:53.044 "name": "Nvme$subsystem", 00:25:53.044 "trtype": "$TEST_TRANSPORT", 00:25:53.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.044 "adrfam": "ipv4", 00:25:53.044 "trsvcid": "$NVMF_PORT", 00:25:53.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.044 "hdgst": ${hdgst:-false}, 00:25:53.044 "ddgst": ${ddgst:-false} 00:25:53.044 }, 00:25:53.044 "method": "bdev_nvme_attach_controller" 00:25:53.044 } 00:25:53.044 EOF 00:25:53.044 )") 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.044 { 00:25:53.044 "params": { 00:25:53.044 "name": "Nvme$subsystem", 00:25:53.044 "trtype": "$TEST_TRANSPORT", 00:25:53.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.044 "adrfam": "ipv4", 00:25:53.044 "trsvcid": "$NVMF_PORT", 00:25:53.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.044 "hdgst": ${hdgst:-false}, 00:25:53.044 "ddgst": ${ddgst:-false} 00:25:53.044 }, 00:25:53.044 "method": "bdev_nvme_attach_controller" 00:25:53.044 } 00:25:53.044 EOF 00:25:53.044 )") 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.044 { 00:25:53.044 "params": { 00:25:53.044 "name": "Nvme$subsystem", 00:25:53.044 "trtype": "$TEST_TRANSPORT", 00:25:53.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.044 "adrfam": "ipv4", 00:25:53.044 "trsvcid": "$NVMF_PORT", 00:25:53.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.044 "hdgst": ${hdgst:-false}, 00:25:53.044 "ddgst": ${ddgst:-false} 00:25:53.044 }, 00:25:53.044 "method": "bdev_nvme_attach_controller" 00:25:53.044 } 00:25:53.044 EOF 00:25:53.044 )") 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.044 04:42:31 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.044 { 00:25:53.044 "params": { 00:25:53.044 "name": "Nvme$subsystem", 00:25:53.044 "trtype": "$TEST_TRANSPORT", 00:25:53.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.044 "adrfam": "ipv4", 00:25:53.044 "trsvcid": "$NVMF_PORT", 00:25:53.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.044 "hdgst": ${hdgst:-false}, 00:25:53.044 "ddgst": ${ddgst:-false} 00:25:53.044 }, 00:25:53.044 "method": "bdev_nvme_attach_controller" 00:25:53.044 } 00:25:53.044 EOF 00:25:53.044 )") 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.044 { 00:25:53.044 "params": { 00:25:53.044 "name": "Nvme$subsystem", 00:25:53.044 "trtype": "$TEST_TRANSPORT", 00:25:53.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.044 "adrfam": "ipv4", 00:25:53.044 "trsvcid": "$NVMF_PORT", 00:25:53.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.044 "hdgst": ${hdgst:-false}, 00:25:53.044 "ddgst": ${ddgst:-false} 00:25:53.044 }, 00:25:53.044 "method": "bdev_nvme_attach_controller" 00:25:53.044 } 00:25:53.044 EOF 00:25:53.044 )") 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.044 { 00:25:53.044 "params": { 00:25:53.044 "name": "Nvme$subsystem", 00:25:53.044 "trtype": "$TEST_TRANSPORT", 00:25:53.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.044 "adrfam": "ipv4", 00:25:53.044 "trsvcid": "$NVMF_PORT", 00:25:53.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.044 "hdgst": ${hdgst:-false}, 00:25:53.044 "ddgst": ${ddgst:-false} 00:25:53.044 }, 00:25:53.044 "method": "bdev_nvme_attach_controller" 00:25:53.044 } 00:25:53.044 EOF 00:25:53.044 )") 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.044 [2024-11-17 04:42:31.729377] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:25:53.044 [2024-11-17 04:42:31.729432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394837 ] 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.044 { 00:25:53.044 "params": { 00:25:53.044 "name": "Nvme$subsystem", 00:25:53.044 "trtype": "$TEST_TRANSPORT", 00:25:53.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.044 "adrfam": "ipv4", 00:25:53.044 "trsvcid": "$NVMF_PORT", 00:25:53.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.044 "hdgst": ${hdgst:-false}, 00:25:53.044 "ddgst": ${ddgst:-false} 00:25:53.044 }, 00:25:53.044 "method": "bdev_nvme_attach_controller" 00:25:53.044 } 00:25:53.044 EOF 00:25:53.044 )") 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.044 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.045 { 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme$subsystem", 00:25:53.045 "trtype": "$TEST_TRANSPORT", 00:25:53.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "$NVMF_PORT", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.045 "hdgst": ${hdgst:-false}, 00:25:53.045 "ddgst": ${ddgst:-false} 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 } 00:25:53.045 EOF 00:25:53.045 )") 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.045 { 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme$subsystem", 00:25:53.045 "trtype": "$TEST_TRANSPORT", 00:25:53.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "$NVMF_PORT", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.045 "hdgst": ${hdgst:-false}, 00:25:53.045 "ddgst": ${ddgst:-false} 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 } 00:25:53.045 EOF 00:25:53.045 )") 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:53.045 { 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme$subsystem", 00:25:53.045 "trtype": "$TEST_TRANSPORT", 00:25:53.045 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "$NVMF_PORT", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.045 "hdgst": ${hdgst:-false}, 00:25:53.045 "ddgst": ${ddgst:-false} 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 } 00:25:53.045 EOF 00:25:53.045 )") 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:53.045 04:42:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme1", 00:25:53.045 "trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 },{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme2", 00:25:53.045 "trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 },{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme3", 00:25:53.045 "trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 },{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme4", 00:25:53.045 "trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 },{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme5", 00:25:53.045 "trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 },{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme6", 00:25:53.045 "trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 },{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme7", 00:25:53.045 
"trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 },{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme8", 00:25:53.045 "trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 },{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme9", 00:25:53.045 "trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 },{ 00:25:53.045 "params": { 00:25:53.045 "name": "Nvme10", 00:25:53.045 "trtype": "rdma", 00:25:53.045 "traddr": "192.168.100.8", 00:25:53.045 "adrfam": "ipv4", 00:25:53.045 "trsvcid": "4420", 00:25:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:53.045 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:53.045 "hdgst": false, 00:25:53.045 "ddgst": false 00:25:53.045 }, 00:25:53.045 "method": "bdev_nvme_attach_controller" 00:25:53.045 }' 00:25:53.045 [2024-11-17 04:42:31.824401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.045 [2024-11-17 04:42:31.846870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.982 Running I/O for 1 seconds... 
00:25:55.362 3282.00 IOPS, 205.12 MiB/s
00:25:55.362 Latency(us)
[2024-11-17T03:42:34.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:55.362 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme1n1 : 1.17 361.45 22.59 0.00 0.00 172051.97 5976.88 212231.78
00:25:55.362 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme2n1 : 1.17 367.81 22.99 0.00 0.00 168927.78 37539.02 201326.59
00:25:55.362 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme3n1 : 1.19 405.00 25.31 0.00 0.00 152065.69 6868.17 148478.36
00:25:55.362 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme4n1 : 1.19 397.90 24.87 0.00 0.00 152554.70 8965.32 140089.75
00:25:55.362 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme5n1 : 1.19 384.17 24.01 0.00 0.00 155593.73 8703.18 126667.98
00:25:55.362 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme6n1 : 1.19 390.61 24.41 0.00 0.00 151126.02 8493.47 115762.79
00:25:55.362 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme7n1 : 1.19 398.67 24.92 0.00 0.00 146201.36 8545.89 108213.04
00:25:55.362 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme8n1 : 1.19 396.65 24.79 0.00 0.00 144871.99 8650.75 99405.00
00:25:55.362 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme9n1 : 1.18 378.95 23.68 0.00 0.00 150144.73 9961.47 109890.76
00:25:55.362 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:55.362 Verification LBA range: start 0x0 length 0x400
00:25:55.362 Nvme10n1 : 1.18 324.35 20.27 0.00 0.00 172786.07 10957.62 216426.09
[2024-11-17T03:42:34.192Z] ===================================================================================================================
00:25:55.362 [2024-11-17T03:42:34.192Z] Total : 3805.56 237.85 0.00 0.00 156078.81 5976.88 216426.09
00:25:55.362 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:25:55.362 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:55.362 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:55.362 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:55.362 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:55.362 04:42:34
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:55.362 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:55.622 rmmod nvme_rdma 00:25:55.622 rmmod nvme_fabrics 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 394230 ']' 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 394230 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 394230 ']' 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 394230 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394230 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394230' 00:25:55.622 killing process with pid 394230 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 394230 00:25:55.622 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 394230 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:56.192 00:25:56.192 real 0m13.375s 00:25:56.192 user 0m28.362s 00:25:56.192 sys 0m6.634s 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@10 -- # set +x 00:25:56.192 ************************************ 00:25:56.192 END TEST nvmf_shutdown_tc1 00:25:56.192 ************************************ 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:56.192 ************************************ 00:25:56.192 START TEST nvmf_shutdown_tc2 00:25:56.192 ************************************ 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.192 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:56.193 04:42:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:56.193 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:56.193 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:56.193 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:56.193 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:56.193 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:56.194 04:42:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:56.194 04:42:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:25:56.194 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:25:56.194 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:25:56.194 altname enp217s0f0np0
00:25:56.194 altname ens818f0np0
00:25:56.194 inet 192.168.100.8/24 scope global mlx_0_0
00:25:56.194 valid_lft forever preferred_lft forever
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:25:56.194 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:25:56.194 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:25:56.194 altname enp217s0f1np1
00:25:56.194 altname ens818f1np1
00:25:56.194 inet 192.168.100.9/24 scope global mlx_0_1
00:25:56.194 valid_lft forever preferred_lft forever
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:25:56.194 04:42:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:25:56.194 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:25:56.194 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:56.454
04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:56.454 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:56.455 192.168.100.9' 00:25:56.455 04:42:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:56.455 192.168.100.9' 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:56.455 192.168.100.9' 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=395476 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 395476 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 395476 ']' 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
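The trace above (nvmf/common.sh@484-486) picks the first and second target IPs out of the newline-separated RDMA_IP_LIST. A minimal sketch of that head/tail idiom, with the variable values taken from this run:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    # First line of the list becomes the first target IP
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    # Skip the first line, then take one more for the second target
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP"   # 192.168.100.8
    echo "$NVMF_SECOND_TARGET_IP"  # 192.168.100.9

With both IPs resolved, the harness appends --num-shared-buffers 1024 to NVMF_TRANSPORT_OPTS, loads nvme-rdma, and starts the target, as the entries that follow show.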
00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.455 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.455 [2024-11-17 04:42:35.157703] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:25:56.455 [2024-11-17 04:42:35.157758] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.455 [2024-11-17 04:42:35.251074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:56.455 [2024-11-17 04:42:35.273333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.455 [2024-11-17 04:42:35.273370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.455 [2024-11-17 04:42:35.273380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.455 [2024-11-17 04:42:35.273388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.455 [2024-11-17 04:42:35.273395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.455 [2024-11-17 04:42:35.275201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.455 [2024-11-17 04:42:35.275313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.455 [2024-11-17 04:42:35.275422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.455 [2024-11-17 04:42:35.275424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:56.715 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.715 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:56.715 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:56.715 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.715 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.715 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.715 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:56.715 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.715 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.715 [2024-11-17 04:42:35.440272] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1313f50/0x1318400) succeed. 00:25:56.715 [2024-11-17 04:42:35.449423] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1315590/0x1359aa0) succeed. 
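The nvmf_create_transport call at target/shutdown.sh@21 above is issued through the suite's rpc_cmd helper, which in the SPDK tree forwards to scripts/rpc.py against the target's RPC socket. An equivalent manual invocation, assuming the default /var/tmp/spdk.sock socket used in this run, would look roughly like:

    # Create the RDMA transport with the same options the test passes
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices that follow confirm the transport bound both mlx5 ports discovered earlier.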
00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.974 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.975 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.975 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.975 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.975 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:56.975 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:56.975 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:56.975 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.975 04:42:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.975 Malloc1 00:25:56.975 [2024-11-17 04:42:35.689650] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:56.975 Malloc2 00:25:56.975 Malloc3 00:25:56.975 Malloc4 00:25:57.234 Malloc5 00:25:57.234 Malloc6 00:25:57.234 Malloc7 00:25:57.234 Malloc8 00:25:57.234 Malloc9 00:25:57.494 Malloc10 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=395783 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 395783 /var/tmp/bdevperf.sock 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 395783 ']' 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:57.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
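The rpc_cmd at target/shutdown.sh@36 replays the rpcs.txt that the cat loop above assembled; the Malloc1 through Malloc10 bdevs and the "Target Listening" notice are its visible output. Each block in that file plausibly has the following shape for subsystem $i (these are standard SPDK RPC names, but the malloc size and block size here are illustrative, not recorded in this log):

    # Hypothetical rpcs.txt fragment for subsystem $i
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420

The NQNs, transport, address, and port match the bdev_nvme_attach_controller parameters printed further down, which is how bdevperf reconnects to each of the ten subsystems.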
00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.494 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.494 { 00:25:57.494 "params": { 00:25:57.494 "name": "Nvme$subsystem", 00:25:57.494 "trtype": "$TEST_TRANSPORT", 00:25:57.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.494 "adrfam": "ipv4", 00:25:57.494 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.495 { 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme$subsystem", 00:25:57.495 "trtype": "$TEST_TRANSPORT", 00:25:57.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.495 { 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme$subsystem", 00:25:57.495 "trtype": "$TEST_TRANSPORT", 00:25:57.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.495 { 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme$subsystem", 00:25:57.495 "trtype": "$TEST_TRANSPORT", 00:25:57.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.495 { 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme$subsystem", 00:25:57.495 "trtype": "$TEST_TRANSPORT", 00:25:57.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 [2024-11-17 04:42:36.182430] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:25:57.495 [2024-11-17 04:42:36.182482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395783 ] 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.495 { 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme$subsystem", 00:25:57.495 "trtype": "$TEST_TRANSPORT", 00:25:57.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.495 { 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme$subsystem", 00:25:57.495 "trtype": "$TEST_TRANSPORT", 00:25:57.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.495 { 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme$subsystem", 00:25:57.495 "trtype": "$TEST_TRANSPORT", 00:25:57.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.495 { 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme$subsystem", 00:25:57.495 "trtype": "$TEST_TRANSPORT", 00:25:57.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.495 { 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme$subsystem", 00:25:57.495 "trtype": "$TEST_TRANSPORT", 00:25:57.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "$NVMF_PORT", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.495 "hdgst": ${hdgst:-false}, 00:25:57.495 "ddgst": ${ddgst:-false} 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.495 } 00:25:57.495 EOF 00:25:57.495 )") 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:25:57.495 04:42:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:57.495 "params": { 00:25:57.495 "name": "Nvme1", 00:25:57.495 "trtype": "rdma", 00:25:57.495 "traddr": "192.168.100.8", 00:25:57.495 "adrfam": "ipv4", 00:25:57.495 "trsvcid": "4420", 00:25:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:57.495 "hdgst": false, 00:25:57.495 "ddgst": false 00:25:57.495 }, 00:25:57.495 "method": "bdev_nvme_attach_controller" 00:25:57.496 },{ 00:25:57.496 "params": { 00:25:57.496 "name": "Nvme2", 00:25:57.496 "trtype": "rdma", 00:25:57.496 "traddr": "192.168.100.8", 00:25:57.496 "adrfam": "ipv4", 00:25:57.496 "trsvcid": "4420", 00:25:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:57.496 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:57.496 "hdgst": false, 00:25:57.496 "ddgst": false 00:25:57.496 }, 00:25:57.496 "method": "bdev_nvme_attach_controller" 00:25:57.496 },{ 00:25:57.496 "params": { 00:25:57.496 "name": "Nvme3", 00:25:57.496 "trtype": "rdma", 00:25:57.496 "traddr": "192.168.100.8", 00:25:57.496 "adrfam": "ipv4", 00:25:57.496 "trsvcid": "4420", 00:25:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:57.496 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:57.496 "hdgst": false, 00:25:57.496 "ddgst": false 00:25:57.496 }, 00:25:57.496 "method": "bdev_nvme_attach_controller" 00:25:57.496 },{ 00:25:57.496 "params": { 00:25:57.496 "name": "Nvme4", 00:25:57.496 "trtype": "rdma", 00:25:57.496 "traddr": "192.168.100.8", 00:25:57.496 "adrfam": "ipv4", 00:25:57.496 "trsvcid": "4420", 00:25:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:57.496 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:57.496 "hdgst": false, 00:25:57.496 "ddgst": false 00:25:57.496 }, 00:25:57.496 "method": "bdev_nvme_attach_controller" 00:25:57.496 },{ 00:25:57.496 "params": { 00:25:57.496 "name": "Nvme5", 00:25:57.496 "trtype": "rdma", 00:25:57.496 "traddr": "192.168.100.8", 00:25:57.496 "adrfam": "ipv4", 00:25:57.496 "trsvcid": "4420", 00:25:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:57.496 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:57.496 "hdgst": false, 00:25:57.496 "ddgst": false 00:25:57.496 }, 00:25:57.496 "method": "bdev_nvme_attach_controller" 00:25:57.496 },{ 00:25:57.496 "params": { 00:25:57.496 "name": "Nvme6", 00:25:57.496 "trtype": "rdma", 00:25:57.496 "traddr": "192.168.100.8", 00:25:57.496 "adrfam": "ipv4", 00:25:57.496 "trsvcid": "4420", 00:25:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:57.496 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:57.496 "hdgst": false, 00:25:57.496 "ddgst": false 00:25:57.496 }, 00:25:57.496 "method": "bdev_nvme_attach_controller" 00:25:57.496 },{ 00:25:57.496 "params": { 00:25:57.496 "name": "Nvme7", 00:25:57.496 "trtype": "rdma", 00:25:57.496 "traddr": "192.168.100.8", 00:25:57.496 "adrfam": "ipv4", 00:25:57.496 "trsvcid": "4420", 00:25:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:57.496 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:57.496 "hdgst": false, 00:25:57.496 "ddgst": false 00:25:57.496 }, 00:25:57.496 "method": "bdev_nvme_attach_controller" 00:25:57.496 },{ 00:25:57.496 "params": { 00:25:57.496 "name": "Nvme8", 00:25:57.496 "trtype": "rdma", 00:25:57.496 "traddr": "192.168.100.8", 00:25:57.496 "adrfam": "ipv4", 00:25:57.496 "trsvcid": "4420", 00:25:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:25:57.496 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:57.496 "hdgst": false, 00:25:57.496 "ddgst": false 00:25:57.496 }, 00:25:57.496 "method": "bdev_nvme_attach_controller" 00:25:57.496 },{ 00:25:57.496 "params": { 00:25:57.496 "name": "Nvme9", 00:25:57.496 "trtype": "rdma", 00:25:57.496 "traddr": "192.168.100.8", 00:25:57.496 "adrfam": "ipv4", 00:25:57.496 "trsvcid": "4420", 00:25:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:57.496 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:57.496 "hdgst": false, 00:25:57.496 "ddgst": false 00:25:57.496 }, 00:25:57.496 "method": "bdev_nvme_attach_controller" 00:25:57.496 },{ 00:25:57.496 "params": { 00:25:57.496 "name": "Nvme10", 00:25:57.496 "trtype": "rdma", 00:25:57.496 "traddr": "192.168.100.8", 00:25:57.496 "adrfam": "ipv4", 00:25:57.496 "trsvcid": "4420", 00:25:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:57.496 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:57.496 "hdgst": false, 00:25:57.496 "ddgst": false 00:25:57.496 }, 00:25:57.496 "method": "bdev_nvme_attach_controller" 00:25:57.496 }' 00:25:57.496 [2024-11-17 04:42:36.275090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.496 [2024-11-17 04:42:36.297364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.434 Running I/O for 10 seconds... 00:25:58.434 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.434 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:58.434 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:58.434 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.434 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.694 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.695 04:42:37 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=4 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 4 -ge 100 ']' 00:25:58.695 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:58.955 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:58.955 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:58.955 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:58.955 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:58.955 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.955 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.215 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=148 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 148 -ge 100 ']' 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 395783 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 395783 ']' 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 395783 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395783 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395783' 00:25:59.216 killing process with pid 395783 00:25:59.216 04:42:37 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 395783
00:25:59.216 04:42:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 395783
00:25:59.216 Received shutdown signal, test time was about 0.796346 seconds
00:25:59.216
00:25:59.216 Latency(us)
00:25:59.216 [2024-11-17T03:42:38.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:59.216 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme1n1 : 0.78 348.64 21.79 0.00 0.00 179896.32 5976.88 199648.87
00:25:59.216 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme2n1 : 0.78 346.96 21.68 0.00 0.00 177080.39 7654.60 187065.96
00:25:59.216 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme3n1 : 0.79 355.40 22.21 0.00 0.00 169554.27 7654.60 177838.49
00:25:59.216 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme4n1 : 0.79 407.01 25.44 0.00 0.00 145176.82 5819.60 130023.42
00:25:59.216 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme5n1 : 0.79 406.20 25.39 0.00 0.00 142818.34 8441.04 125829.12
00:25:59.216 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme6n1 : 0.79 405.44 25.34 0.00 0.00 140001.94 9227.47 117440.51
00:25:59.216 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme7n1 : 0.79 404.63 25.29 0.00 0.00 137432.68 10013.90 108213.04
00:25:59.216 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme8n1 : 0.79 403.96 25.25 0.00 0.00 134152.52 10538.19 100663.30
00:25:59.216 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme9n1 : 0.79 403.09 25.19 0.00 0.00 132231.33 11429.48 91016.40
00:25:59.216 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:59.216 Verification LBA range: start 0x0 length 0x400
00:25:59.216 Nvme10n1 : 0.80 321.78 20.11 0.00 0.00 161679.05 8703.18 203843.17
[2024-11-17T03:42:38.046Z] ===================================================================================================================
[2024-11-17T03:42:38.046Z] Total : 3803.11 237.69 0.00 0.00 150717.80 5819.60 203843.17
00:25:59.476 04:42:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:26:00.413 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 395476
00:26:00.413 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:26:00.413 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:00.413 04:42:39
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:00.413 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:00.672 rmmod nvme_rdma 00:26:00.672 rmmod nvme_fabrics 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 395476 ']' 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 395476 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 395476 ']' 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 395476 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:26:00.672 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.673 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395476 00:26:00.673 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:00.673 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:00.673 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395476' 00:26:00.673 killing process with pid 395476 00:26:00.673 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 395476 00:26:00.673 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 395476 00:26:01.243 04:42:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:01.243 00:26:01.243 real 0m4.959s 00:26:01.243 user 0m19.724s 00:26:01.243 sys 0m1.178s 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:01.243 ************************************ 00:26:01.243 END TEST nvmf_shutdown_tc2 00:26:01.243 ************************************ 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:01.243 ************************************ 00:26:01.243 START TEST nvmf_shutdown_tc3 00:26:01.243 ************************************ 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.243 04:42:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:01.243 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.244 
04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:01.244 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:01.244 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:01.244 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:01.244 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:01.244 04:42:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:01.244 04:42:39 
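Spelled out, the discovery pipeline traced in this stretch turns PCI functions into usable RDMA interface names in three steps: enumerate netdevs under each PCI function in sysfs, intersect them with what rxe_cfg reports, and read each survivor's IPv4 address. A sketch reconstructed from the commands above (function names follow nvmf/common.sh; the exact bodies are inferred from the trace, not copied):

# 1) netdev names under each PCI function (common.sh@410-429 above)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path
    net_devs+=("${pci_net_devs[@]}")
done

# 2) keep only interfaces rxe_cfg also knows about (common.sh@96-109)
get_rdma_if_list() {
    local net_dev rxe_net_dev rxe_net_devs
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2                            # next net_dev once matched
            fi
        done
    done
}

# 3) first IPv4 address of an interface (common.sh@116-117)
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}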
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:01.244 04:42:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:01.244 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:01.244 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:01.244 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:01.245 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:01.245 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:01.245 altname enp217s0f0np0 00:26:01.245 altname ens818f0np0 00:26:01.245 inet 192.168.100.8/24 scope global mlx_0_0 00:26:01.245 valid_lft forever preferred_lft forever 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:01.245 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:01.245 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:01.245 altname enp217s0f1np1 00:26:01.245 altname ens818f1np1 00:26:01.245 inet 192.168.100.9/24 scope global mlx_0_1 00:26:01.245 valid_lft forever preferred_lft forever 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:01.245 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 
-- # ip -o -4 addr show mlx_0_1 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:01.505 192.168.100.9' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:01.505 192.168.100.9' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:01.505 192.168.100.9' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=396451 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 396451 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 396451 ']' 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.505 04:42:40 
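The head/tail juggling just traced reduces to a couple of assignments. A sketch, assuming get_available_rdma_ips prints one address per line as the echoed list above suggests:

RDMA_IP_LIST=$(get_available_rdma_ips)                                 # here: 192.168.100.8 and 192.168.100.9
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
[ -z "$NVMF_FIRST_TARGET_IP" ] && exit 1  # the trace shows only the [ -z ] test; failing hard is an assumption
NVMF_TRANSPORT_OPTS="-t rdma --num-shared-buffers 1024"
modprobe nvme-rdma                        # kernel initiator module, loaded by the harness as traced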
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.505 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:01.505 [2024-11-17 04:42:40.202851] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:01.505 [2024-11-17 04:42:40.202907] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.505 [2024-11-17 04:42:40.298568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:01.505 [2024-11-17 04:42:40.321272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.506 [2024-11-17 04:42:40.321312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.506 [2024-11-17 04:42:40.321321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.506 [2024-11-17 04:42:40.321330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.506 [2024-11-17 04:42:40.321337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.506 [2024-11-17 04:42:40.322994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.506 [2024-11-17 04:42:40.323103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.506 [2024-11-17 04:42:40.323189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.506 [2024-11-17 04:42:40.323191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:01.766 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.766 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:01.766 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:01.766 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:01.766 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:01.766 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.766 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:01.766 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.766 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:01.766 [2024-11-17 04:42:40.483904] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x15b9f50/0x15be400) succeed. 00:26:01.766 [2024-11-17 04:42:40.493029] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15bb590/0x15ffaa0) succeed. 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:02.026 04:42:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.026 04:42:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:02.026 Malloc1 00:26:02.026 [2024-11-17 04:42:40.730731] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:02.026 Malloc2 00:26:02.026 Malloc3 00:26:02.026 Malloc4 00:26:02.285 Malloc5 00:26:02.285 Malloc6 00:26:02.285 Malloc7 00:26:02.285 Malloc8 00:26:02.285 Malloc9 00:26:02.285 Malloc10 00:26:02.545 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.545 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:02.545 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.545 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:02.545 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=396749 00:26:02.545 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 396749 /var/tmp/bdevperf.sock 00:26:02.545 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 396749 ']' 00:26:02.545 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:02.545 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:02.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
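The bdevperf launch just traced passes its configuration as /dev/fd/63, which is the signature of bash process substitution: the JSON is generated on the fly and handed over as a file descriptor. Roughly (binary path shortened; the trace shows the full workspace path):

build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock  # block until the RPC socket answers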
00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.546 { 00:26:02.546 "params": { 00:26:02.546 "name": "Nvme$subsystem", 00:26:02.546 "trtype": "$TEST_TRANSPORT", 00:26:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.546 "adrfam": "ipv4", 00:26:02.546 "trsvcid": "$NVMF_PORT", 00:26:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.546 "hdgst": ${hdgst:-false}, 00:26:02.546 "ddgst": ${ddgst:-false} 00:26:02.546 }, 00:26:02.546 "method": "bdev_nvme_attach_controller" 00:26:02.546 } 00:26:02.546 EOF 00:26:02.546 )") 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.546 { 00:26:02.546 "params": { 00:26:02.546 "name": "Nvme$subsystem", 00:26:02.546 "trtype": "$TEST_TRANSPORT", 00:26:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.546 "adrfam": "ipv4", 00:26:02.546 "trsvcid": "$NVMF_PORT", 00:26:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.546 "hdgst": ${hdgst:-false}, 00:26:02.546 "ddgst": ${ddgst:-false} 00:26:02.546 }, 00:26:02.546 "method": "bdev_nvme_attach_controller" 00:26:02.546 } 00:26:02.546 EOF 00:26:02.546 )") 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.546 { 00:26:02.546 "params": { 00:26:02.546 "name": "Nvme$subsystem", 00:26:02.546 "trtype": "$TEST_TRANSPORT", 00:26:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.546 "adrfam": "ipv4", 00:26:02.546 "trsvcid": "$NVMF_PORT", 00:26:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.546 "hdgst": ${hdgst:-false}, 00:26:02.546 "ddgst": ${ddgst:-false} 00:26:02.546 }, 00:26:02.546 "method": "bdev_nvme_attach_controller" 00:26:02.546 } 00:26:02.546 EOF 00:26:02.546 )") 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.546 { 00:26:02.546 "params": { 00:26:02.546 "name": "Nvme$subsystem", 00:26:02.546 "trtype": "$TEST_TRANSPORT", 00:26:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.546 "adrfam": "ipv4", 00:26:02.546 "trsvcid": "$NVMF_PORT", 00:26:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.546 "hdgst": ${hdgst:-false}, 00:26:02.546 "ddgst": ${ddgst:-false} 00:26:02.546 }, 00:26:02.546 "method": "bdev_nvme_attach_controller" 00:26:02.546 } 00:26:02.546 EOF 00:26:02.546 )") 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.546 { 00:26:02.546 "params": { 00:26:02.546 "name": "Nvme$subsystem", 00:26:02.546 "trtype": "$TEST_TRANSPORT", 00:26:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.546 "adrfam": "ipv4", 00:26:02.546 "trsvcid": "$NVMF_PORT", 00:26:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.546 "hdgst": ${hdgst:-false}, 00:26:02.546 "ddgst": ${ddgst:-false} 00:26:02.546 }, 00:26:02.546 "method": "bdev_nvme_attach_controller" 00:26:02.546 } 00:26:02.546 EOF 00:26:02.546 )") 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.546 [2024-11-17 04:42:41.221064] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:02.546 [2024-11-17 04:42:41.221118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396749 ] 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.546 { 00:26:02.546 "params": { 00:26:02.546 "name": "Nvme$subsystem", 00:26:02.546 "trtype": "$TEST_TRANSPORT", 00:26:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.546 "adrfam": "ipv4", 00:26:02.546 "trsvcid": "$NVMF_PORT", 00:26:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.546 "hdgst": ${hdgst:-false}, 00:26:02.546 "ddgst": ${ddgst:-false} 00:26:02.546 }, 00:26:02.546 "method": "bdev_nvme_attach_controller" 00:26:02.546 } 00:26:02.546 EOF 00:26:02.546 )") 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.546 { 00:26:02.546 "params": { 00:26:02.546 "name": "Nvme$subsystem", 00:26:02.546 "trtype": "$TEST_TRANSPORT", 00:26:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.546 "adrfam": "ipv4", 00:26:02.546 "trsvcid": "$NVMF_PORT", 00:26:02.546 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.546 "hdgst": ${hdgst:-false}, 00:26:02.546 "ddgst": ${ddgst:-false} 00:26:02.546 }, 00:26:02.546 "method": "bdev_nvme_attach_controller" 00:26:02.546 } 00:26:02.546 EOF 00:26:02.546 )") 00:26:02.546 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.547 { 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme$subsystem", 00:26:02.547 "trtype": "$TEST_TRANSPORT", 00:26:02.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "$NVMF_PORT", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.547 "hdgst": ${hdgst:-false}, 00:26:02.547 "ddgst": ${ddgst:-false} 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 } 00:26:02.547 EOF 00:26:02.547 )") 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.547 { 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme$subsystem", 00:26:02.547 "trtype": "$TEST_TRANSPORT", 00:26:02.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "$NVMF_PORT", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.547 "hdgst": ${hdgst:-false}, 00:26:02.547 "ddgst": ${ddgst:-false} 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 } 00:26:02.547 EOF 00:26:02.547 )") 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.547 { 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme$subsystem", 00:26:02.547 "trtype": "$TEST_TRANSPORT", 00:26:02.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "$NVMF_PORT", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.547 "hdgst": ${hdgst:-false}, 00:26:02.547 "ddgst": ${ddgst:-false} 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 } 00:26:02.547 EOF 00:26:02.547 )") 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:26:02.547 04:42:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme1", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 },{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme2", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 },{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme3", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 },{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme4", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 },{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme5", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 },{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme6", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 },{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme7", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 },{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme8", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 },{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme9", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 },{ 00:26:02.547 "params": { 00:26:02.547 "name": "Nvme10", 00:26:02.547 "trtype": "rdma", 00:26:02.547 "traddr": "192.168.100.8", 00:26:02.547 "adrfam": "ipv4", 00:26:02.547 "trsvcid": "4420", 00:26:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:02.547 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:02.547 "hdgst": false, 00:26:02.547 "ddgst": false 00:26:02.547 }, 00:26:02.547 "method": "bdev_nvme_attach_controller" 00:26:02.547 }' 00:26:02.547 [2024-11-17 04:42:41.312228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.547 [2024-11-17 04:42:41.334576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.486 Running I/O for 10 seconds... 00:26:03.486 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.486 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:03.486 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:03.486 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.486 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:03.745 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:04.004 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:04.004 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:04.004 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:04.004 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:04.004 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.004 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 396451 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 396451 ']' 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 396451 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396451 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:04.264 04:42:42 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396451' 00:26:04.264 killing process with pid 396451 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 396451 00:26:04.264 04:42:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 396451 00:26:04.783 2645.00 IOPS, 165.31 MiB/s [2024-11-17T03:42:43.613Z] 04:42:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:05.361 [2024-11-17 04:42:43.977820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.361 [2024-11-17 04:42:43.977911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:e110 p:0 m:0 dnr:0 00:26:05.361 [2024-11-17 04:42:43.977963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.361 [2024-11-17 04:42:43.977996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:e110 p:0 m:0 dnr:0 00:26:05.361 [2024-11-17 04:42:43.978028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.361 [2024-11-17 04:42:43.978061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:e110 p:0 m:0 dnr:0 00:26:05.361 [2024-11-17 04:42:43.978096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.361 [2024-11-17 04:42:43.978127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:e110 p:0 m:0 dnr:0 00:26:05.361 [2024-11-17 04:42:43.980787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:05.361 [2024-11-17 04:42:43.980833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
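Before the error burst, the trace above finished the happy path: waitforio confirmed real traffic against Nvme1n1 (3 reads on the first poll, 131 on the second), then the target process was killed to trigger the shutdown under test. The polling loop as reconstructed from the shutdown.sh@58-70 steps, with killprocess abbreviated (the traced ps-based name check and sudo special case are omitted here):

waitforio() {
    local rpc_addr=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then  # enough I/O observed to call the workload live
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

killprocess() {  # abbreviated sketch
    local pid=$1
    kill -0 "$pid" || return                   # still alive?
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}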
00:26:05.361 [2024-11-17 04:42:43.981038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.361 [2024-11-17 04:42:43.981095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:a134 p:0 m:0 dnr:0 00:26:05.361 [2024-11-17 04:42:43.981129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.361 [2024-11-17 04:42:43.981163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:a134 p:0 m:0 dnr:0 00:26:05.361 [2024-11-17 04:42:43.981197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.361 [2024-11-17 04:42:43.981229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:a134 p:0 m:0 dnr:0 00:26:05.361 [2024-11-17 04:42:43.981262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.362 [2024-11-17 04:42:43.981294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:a134 p:0 m:0 dnr:0 00:26:05.362 [2024-11-17 04:42:43.983764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:05.362 [2024-11-17 04:42:43.983809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:05.362 [2024-11-17 04:42:43.983839] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:26:05.362 [2024-11-17 04:42:43.983887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.362 [2024-11-17 04:42:43.983922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:081a p:0 m:0 dnr:0 00:26:05.362 [2024-11-17 04:42:43.984003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.362 [2024-11-17 04:42:43.984035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:081a p:0 m:0 dnr:0 00:26:05.362 [2024-11-17 04:42:43.984068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.362 [2024-11-17 04:42:43.984115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:081a p:0 m:0 dnr:0 00:26:05.362 [2024-11-17 04:42:43.984135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.362 [2024-11-17 04:42:43.984154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:081a p:0 m:0 dnr:0 00:26:05.362 [2024-11-17 04:42:43.986995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:05.362 [2024-11-17 04:42:43.987037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:05.362 [2024-11-17 04:42:43.987069] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:26:05.362 [2024-11-17 04:42:43.987295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:05.362 [2024-11-17 04:42:43.987340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:05.362 [2024-11-17 04:42:43.987376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:26:05.362 [2024-11-17 04:42:43.987811] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:26:05.362 [2024-11-17 04:42:43.992552] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:05.362 [2024-11-17 04:42:43.992613] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:05.362 [2024-11-17 04:42:43.992641] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040
00:26:05.362 [2024-11-17 04:42:43.992786] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:05.362 [2024-11-17 04:42:43.992821] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:05.362 [2024-11-17 04:42:43.992846] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168e5300
00:26:05.362 [2024-11-17 04:42:43.997843] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:26:05.362 [2024-11-17 04:42:44.007891] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.

The disconnect then drained the qpair's entire outstanding queue: 64 WRITE commands (sqid:1, nsid:1, len:128, each carried as an RDMA keyed SGL data block) were printed by nvme_io_qpair_print_command (nvme_qpair.c: 243) between 04:42:44.010886 and 04:42:44.012612, and every one completed with the identical status from spdk_nvme_print_completion (nvme_qpair.c: 474): ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:7178 p:0 m:0 dnr:0.

    op     cid  lba    SGL address     key
    WRITE   59  24576  0x201002ebf680  0x184800
    WRITE   60  24704  0x201002eaf600  0x184800
    WRITE   61  24832  0x201002e9f580  0x184800
    WRITE   62  24960  0x201002e8f500  0x184800
    WRITE   63  25088  0x201002e7f480  0x184800
    WRITE   64  25216  0x201002e6f400  0x184800
    WRITE    0  25344  0x201002e5f380  0x184800
    WRITE    1  25472  0x201002e4f300  0x184800
    WRITE    2  25600  0x201002e3f280  0x184800
    WRITE    3  25728  0x201002e2f200  0x184800
    WRITE    4  25856  0x201002e1f180  0x184800
    WRITE    5  25984  0x201002e0f100  0x184800
    WRITE    6  26112  0x2010031f0000  0x184500
    WRITE    7  26240  0x2010031dff80  0x184500
    WRITE    8  26368  0x2010031cff00  0x184500
    WRITE    9  26496  0x2010031bfe80  0x184500
    WRITE   10  26624  0x2010031afe00  0x184500
    WRITE   11  26752  0x20100319fd80  0x184500
    WRITE   12  26880  0x20100318fd00  0x184500
    WRITE   13  27008  0x20100317fc80  0x184500
    WRITE   14  27136  0x20100316fc00  0x184500
    WRITE   15  27264  0x20100315fb80  0x184500
    WRITE   16  27392  0x20100314fb00  0x184500
    WRITE   17  27520  0x20100313fa80  0x184500
    WRITE   18  27648  0x20100312fa00  0x184500
    WRITE   19  27776  0x20100311f980  0x184500
    WRITE   20  27904  0x20100310f900  0x184500
    WRITE   21  28032  0x2010030ff880  0x184500
    WRITE   22  28160  0x2010030ef800  0x184500
    WRITE   23  28288  0x2010030df780  0x184500
    WRITE   24  28416  0x2010030cf700  0x184500
    WRITE   25  28544  0x2010030bf680  0x184500
    WRITE   26  28672  0x2010030af600  0x184500
    WRITE   27  28800  0x20100309f580  0x184500
    WRITE   28  28928  0x20100308f500  0x184500
    WRITE   29  29056  0x20100307f480  0x184500
    WRITE   30  29184  0x20100306f400  0x184500
    WRITE   31  29312  0x20100305f380  0x184500
    WRITE   32  29440  0x20100304f300  0x184500
    WRITE   33  29568  0x20100303f280  0x184500
    WRITE   34  29696  0x20100302f200  0x184500
    WRITE   35  29824  0x20100301f180  0x184500
    WRITE   36  29952  0x20100300f100  0x184500
    WRITE   37  30080  0x2010033f0000  0x181f00
    WRITE   38  30208  0x2010033dff80  0x181f00
    WRITE   39  30336  0x2010033cff00  0x181f00
    WRITE   40  30464  0x2010033bfe80  0x181f00
    WRITE   41  30592  0x2010033afe00  0x181f00
    WRITE   42  30720  0x20100339fd80  0x181f00
    WRITE   43  30848  0x20100338fd00  0x181f00
    WRITE   44  30976  0x20100337fc80  0x181f00
    WRITE   45  31104  0x20100336fc00  0x181f00
    WRITE   46  31232  0x20100335fb80  0x181f00
    WRITE   47  31360  0x20100334fb00  0x181f00
    WRITE   48  31488  0x20100333fa80  0x181f00
    WRITE   49  31616  0x20100332fa00  0x181f00
    WRITE   50  31744  0x20100331f980  0x181f00
    WRITE   51  31872  0x20100330f900  0x181f00
    WRITE   52  32000  0x2010032ff880  0x181f00
    WRITE   53  32128  0x2010032ef800  0x181f00
    WRITE   54  32256  0x2010032df780  0x181f00
    WRITE   55  32384  0x2010032cf700  0x181f00
    WRITE   56  32512  0x2010032bf680  0x181f00
    WRITE   57  32640  0x201002ecf700  0x184800

00:26:05.364 [2024-11-17 04:42:44.015893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:05.364 [2024-11-17 04:42:44.020516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
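Every abort in these dumps carries NVMe status (00/08): status code type 0 (generic) with status code 0x08, "command aborted due to SQ deletion". A minimal sketch of a completion callback that classifies this status using SPDK's public definitions (the callback name `io_done` is hypothetical, not from this test):

    /* Hypothetical I/O completion callback classifying the (00/08) status. */
    #include "spdk/nvme.h"

    static void
    io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* The command did not fail on the media; its submission queue
             * was torn down mid-flight by the reset/failover.  Callers
             * typically requeue such I/O once the controller comes back. */
        }
        (void)cb_arg;
    }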
The flush that followed on this qpair drained all 64 outstanding commands (32 WRITE, 32 READ; sqid:1, nsid:1, len:128) between 04:42:44.020618 and 04:42:44.023599, each completing with: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:aaac p:0 m:0 dnr:0.

    op     cid  lba    SGL address     key
    WRITE   23  40960  0x20100177fc80  0x181d00
    WRITE   56  41088  0x20100176fc00  0x181d00
    WRITE   25  41216  0x20100175fb80  0x181d00
    WRITE   58  41344  0x20100174fb00  0x181d00
    WRITE   27  41472  0x20100173fa80  0x181d00
    WRITE   60  41600  0x20100172fa00  0x181d00
    WRITE   29  41728  0x20100171f980  0x181d00
    WRITE   62  41856  0x20100170f900  0x181d00
    WRITE   31  41984  0x2010016ff880  0x181d00
    WRITE   64  42112  0x2010016ef800  0x181d00
    WRITE    0  42240  0x2010016df780  0x181d00
    WRITE    1  42368  0x2010016cf700  0x181d00
    WRITE    2  42496  0x2010016bf680  0x181d00
    WRITE    3  42624  0x2010016af600  0x181d00
    WRITE    4  42752  0x20100169f580  0x181d00
    WRITE    5  42880  0x20100168f500  0x181d00
    WRITE    6  43008  0x20100167f480  0x181d00
    WRITE    7  43136  0x20100166f400  0x181d00
    WRITE    8  43264  0x20100165f380  0x181d00
    WRITE    9  43392  0x20100164f300  0x181d00
    WRITE   10  43520  0x20100163f280  0x181d00
    WRITE   11  43648  0x20100162f200  0x181d00
    WRITE   12  43776  0x20100161f180  0x181d00
    WRITE   13  43904  0x20100160f100  0x181d00
    WRITE   42  44032  0x2010019f0000  0x181b00
    WRITE   15  44160  0x2010019dff80  0x181b00
    WRITE   16  44288  0x2010019cff00  0x181b00
    WRITE   17  44416  0x2010019bfe80  0x181b00
    WRITE   18  44544  0x2010019afe00  0x181b00
    WRITE   19  44672  0x20100199fd80  0x181b00
    WRITE   20  44800  0x20100198fd00  0x181b00
    WRITE   21  44928  0x20100197fc80  0x181b00
    READ    49  36864  0x20000bb9f000  0x183600
    READ    57  36992  0x20000bb7e000  0x183600
    READ    24  37120  0x20000bb5d000  0x183600
    READ    59  37248  0x20000bb3c000  0x183600
    READ    26  37376  0x20000bb1b000  0x183600
    READ    61  37504  0x20000bafa000  0x183600
    READ    28  37632  0x20000bad9000  0x183600
    READ    63  37760  0x20000bab8000  0x183600
    READ    14  37888  0x20000ba97000  0x183600
    READ    32  38016  0x20000ba76000  0x183600
    READ    33  38144  0x20000ba55000  0x183600
    READ    34  38272  0x20000d588000  0x183600
    READ    35  38400  0x20000d5a9000  0x183600
    READ    36  38528  0x20000d5ca000  0x183600
    READ    37  38656  0x20000d5eb000  0x183600
    READ    38  38784  0x20000d60c000  0x183600
    READ    39  38912  0x2000085ff000  0x183600
    READ    40  39040  0x2000085de000  0x183600
    READ    41  39168  0x2000085bd000  0x183600
    READ    22  39296  0x20000859c000  0x183600
    READ    43  39424  0x20000857b000  0x183600
    READ    44  39552  0x20000855a000  0x183600
    READ    45  39680  0x200008539000  0x183600
    READ    46  39808  0x200008518000  0x183600
    READ    47  39936  0x2000084f7000  0x183600
    READ    48  40064  0x2000084d6000  0x183600
    READ    30  40192  0x2000084b5000  0x183600
    READ    50  40320  0x200008494000  0x183600
    READ    51  40448  0x200008473000  0x183600
    READ    52  40576  0x200008452000  0x183600
    READ    53  40704  0x20000eda3000  0x183600
    READ    54  40832  0x20000ed82000  0x183600
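For context on what these per-command prints correspond to: each WRITE/READ line records the starting LBA, the 128-block transfer length, and the keyed SGL (ADDRESS/key) under which the RDMA transport advertised the payload buffer to the target. A hedged sketch of the submitting side, assuming an existing `ns`/`qpair` and the `io_done` callback sketched earlier, with the buffer taken from SPDK's DMA-safe allocator:

    /* Sketch of the submission side that produces prints like those in
     * these dumps.  `ns` and `qpair` are assumed set up; error paths and
     * buffer lifetime management are shortened for illustration. */
    #include <errno.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static int
    submit_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, uint64_t lba)
    {
        uint32_t lba_count = 128;  /* matches "len:128" in the log */
        void *buf = spdk_zmalloc((size_t)lba_count * spdk_nvme_ns_get_sector_size(ns),
                                 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        if (buf == NULL) {
            return -ENOMEM;
        }
        /* Over RDMA, `buf` is registered and carried as the keyed SGL data
         * block whose ADDRESS/key values appear in each command print. */
        return spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                      io_done, buf, 0 /* io_flags */);
    }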
A further queue flush begins at 04:42:44.026046 (WRITE commands under key:0x183000, READ commands under key:0x183600; sqid:1, nsid:1, len:128), again with each command completing as: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0.

    op     cid  lba    SGL address     key
    WRITE    4  40960  0x201001b5fb80  0x183000
    WRITE   56  41088  0x201001b4fb00  0x183000
    WRITE    6  41216  0x201001b3fa80  0x183000
    WRITE   58  41344  0x201001b2fa00  0x183000
    WRITE    8  41472  0x201001b1f980  0x183000
    WRITE   60  41600  0x201001b0f900  0x183000
    WRITE   30  41728  0x201001aff880  0x183000
    WRITE   62  41856  0x201001aef800  0x183000
    WRITE   16  41984  0x201001adf780  0x183000
    WRITE   28  42112  0x201001acf700  0x183000
    WRITE    0  42240  0x201001abf680  0x183000
    WRITE    1  42368  0x201001aaf600  0x183000
    WRITE    2  42496  0x201001a9f580  0x183000
    READ     3  34432  0x20000b75e000  0x183600
    READ    57  34560  0x20000b73d000  0x183600
    READ     5  34688  0x20000b71c000  0x183600
    READ    59  34816  0x20000b6fb000  0x183600
    READ     7  34944  0x20000b6da000  0x183600
    READ    61  35072  0x20000b6b9000  0x183600
    READ     9  35200  0x20000b698000  0x183600
    READ    63  35328  0x20000b677000  0x183600
    READ    12  35456  0x20000b656000  0x183600
    READ    13  35584  0x20000b635000  0x183600
    READ    14  35712  0x20000f7f3000  0x183600
    READ    15  35840  0x20000f814000  0x183600
    READ    64  35968  0x20000f835000  0x183600
    READ    17  36096  0x20000b77f000  0x183600
    READ    18  36224  0x20000e0bf000  0x183600
    READ    19  36352  0x20000e09e000  0x183600
    READ    20  36480  0x20000dd86000  0x183600
    READ    21  36608  0x20000dda7000  0x183600
    READ    22  36736  0x20000ddc8000  0x183600
    READ    23  36864  0x20000dde9000  0x183600
    READ    24  36992  0x20000fe65000  0x183600
    READ    25  37120  0x20000fe44000  0x183600
    READ    26  37248  0x20000fe23000  0x183600
    READ    27  37376  0x20000fe02000  0x183600
    READ    10  37504  0x20000fde1000  0x183600
    READ    29  37632  0x20000fdc0000  0x183600
    READ    11  37760  0x200008a1f000  0x183600
    READ    31  37888  0x2000089fe000  0x183600
    READ    32  38016  0x2000089dd000  0x183600
    READ    33  38144  0x2000089bc000  0x183600
    READ    34  38272  0x20000899b000  0x183600
    READ    35  38400  0x20000897a000  0x183600
    READ    36  38528  0x2000102c7000  0x183600
    READ    37  38656  0x2000102a6000  0x183600

00:26:05.367 [2024-11-17 04:42:44.028076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010285000 len:0x10000 key:0x183600
00:26:05.367 [2024-11-17 04:42:44.028095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010264000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010243000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010222000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010201000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101e0000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c2f000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c0e000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bed000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bcc000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bab000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b8a000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b69000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b48000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008e3f000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008e1e000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.028720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008dfd000 len:0x10000 key:0x183600 00:26:05.367 [2024-11-17 04:42:44.028738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:485a p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.031328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f3fa80 len:0x10000 key:0x183d00 00:26:05.367 [2024-11-17 04:42:44.031354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:ca12 p:0 m:0 dnr:0 00:26:05.367 [2024-11-17 04:42:44.031380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f2fa00 len:0x10000 key:0x183d00 00:26:05.367 [2024-11-17 04:42:44.031399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
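Note on the status pair: the "(00/08)" in each completion print is (status code type / status code). For SCT 0x0 (generic command status), SC 0x08 is "Aborted - SQ Deletion", i.e. the initiator fails every command still queued on the qpair because its submission queue is being torn down. A minimal sketch of decoding these fields, assuming the public completion layout from SPDK's include/spdk/nvme_spec.h (the helper name print_status is ours, not SPDK's):

#include <stdio.h>
#include "spdk/nvme_spec.h"

/* Print the (sct/sc) pair and flag bits in the same form as the log above. */
static void print_status(const struct spdk_nvme_cpl *cpl)
{
	printf("(%02x/%02x) sqhd:%04x p:%d m:%d dnr:%d\n",
	       cpl->status.sct, cpl->status.sc,
	       cpl->sqhd, cpl->status.p, cpl->status.m, cpl->status.dnr);
}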
00:26:05.367 [2024-11-17 04:42:44.031328 - 04:42:44.032337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 len:128 -- 20 queued writes, sequential lba 40960-43392, SGL KEYED DATA BLOCK len:0x10000 key:0x183d00, followed by 5 writes lba 43520-44032 key:0x183e00
00:26:05.368 [2024-11-17 04:42:44.031328 - 04:42:44.032337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:ca12 p:0 m:0 dnr:0 (message repeated 25 times, once per queued write above)
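Each command print carries one keyed SGL data block: a 64-bit buffer ADDRESS, a byte length, and a 32-bit remote key (the RDMA rkey). The sizes are self-consistent: len:128 logical blocks at 512 bytes each is 65536 bytes = 0x10000, exactly the SGL len field shown in every record. A sketch of filling such a descriptor, assuming the spdk_nvme_sgl_descriptor layout and enum names from SPDK's nvme_spec.h (fill_keyed_sgl is a hypothetical helper):

#include "spdk/nvme_spec.h"

/* 128 blocks * 512 B/block = 0x10000 B, matching the len field above. */
static void fill_keyed_sgl(struct spdk_nvme_sgl_descriptor *sgl,
			   uint64_t addr, uint32_t nbytes, uint32_t rkey)
{
	sgl->keyed.type = SPDK_NVME_SGL_TYPE_KEYED_DATA_BLOCK;
	sgl->keyed.subtype = SPDK_NVME_SGL_SUBTYPE_ADDRESS;
	sgl->keyed.length = nbytes;  /* must fit the 24-bit length field */
	sgl->keyed.key = rkey;       /* e.g. 0x183600 in the reads above */
	sgl->address = addr;
}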
00:26:05.368 [2024-11-17 04:42:44.032358 - 04:42:44.033929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 len:128 -- 39 queued reads, sequential lba 35968-40832, SGL KEYED DATA BLOCK len:0x10000 key:0x183600
00:26:05.369 [2024-11-17 04:42:44.032358 - 04:42:44.033929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:ca12 p:0 m:0 dnr:0 (message repeated 39 times, once per queued read above)
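The text "ABORTED - SQ DELETION" itself comes from SPDK's status-to-string translation, and an application-side completion callback can recognize the same condition. A hedged sketch using the public helpers from spdk/nvme.h (io_done is a hypothetical callback name):

#include <stdio.h>
#include "spdk/nvme.h"

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* Prints e.g. "ABORTED - SQ DELETION" for sct 0x0 / sc 0x08. */
		fprintf(stderr, "I/O failed: %s\n",
			spdk_nvme_cpl_get_status_string(&cpl->status));
		/* dnr:0 in every record above means "do not retry" is clear:
		 * the request may be resubmitted once the qpair reconnects. */
	}
}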
00:26:05.369 [2024-11-17 04:42:44.036706 - 04:42:44.038226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 len:128 -- 11 queued writes, sequential lba 41856-43136, SGL KEYED DATA BLOCK len:0x10000 key:0x184900, followed by 19 writes lba 43264-45568 key:0x184d00
00:26:05.370 [2024-11-17 04:42:44.036706 - 04:42:44.038226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:f49e p:0 m:0 dnr:0 (message repeated 30 times, once per queued write above)
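The trailing p:0 in each completion is the CQ phase tag, and sqhd is the controller's reported submission-queue head: a polling host accepts a CQ entry only when its phase bit matches the expected value (which flips on every queue wrap) and uses sqhd to reclaim SQ slots. A generic sketch of that check (host-side NVMe logic in general, not SPDK-internal code):

#include <stdbool.h>
#include "spdk/nvme_spec.h"

/* A CQ entry is new only if its phase bit matches the host's expectation;
 * expected_phase is toggled by the caller each time the CQ wraps. */
static bool cqe_is_new(const struct spdk_nvme_cpl *cpl, uint8_t expected_phase)
{
	return cpl->status.p == expected_phase;
}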
00:26:05.370 [2024-11-17 04:42:44.038247 - 04:42:44.039603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 len:128 -- 34 queued reads, sequential lba 37504-41728, SGL KEYED DATA BLOCK len:0x10000 key:0x183600
00:26:05.371 [2024-11-17 04:42:44.038247 - 04:42:44.039603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:f49e p:0 m:0 dnr:0 (message repeated 34 times, once per queued read above)
cdw0:66746000 sqhd:f49e p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.041858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026ff880 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.041899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.041980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026ef800 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026df780 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026cf700 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026bf680 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026af600 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100269f580 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100268f500 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100267f480 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100266f400 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100265f380 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100264f300 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100263f280 len:0x10000 key:0x184700 00:26:05.371 [2024-11-17 04:42:44.042578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.371 [2024-11-17 04:42:44.042599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100262f200 len:0x10000 key:0x184700 00:26:05.372 [2024-11-17 04:42:44.042617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.042638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100261f180 len:0x10000 key:0x184700 00:26:05.372 [2024-11-17 04:42:44.042657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.042678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100260f100 len:0x10000 key:0x184700 00:26:05.372 [2024-11-17 04:42:44.042697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.042719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029f0000 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.042747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.042769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029dff80 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.042788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.042809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029cff00 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.042828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.042849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029bfe80 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.042869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.042891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029afe00 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.042909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.042937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100299fd80 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.042957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.042979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100298fd00 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.042997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100297fc80 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.043037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100296fc00 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.043077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100295fb80 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.043118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100294fb00 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.043158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100293fa80 len:0x10000 key:0x184a00 00:26:05.372 [2024-11-17 04:42:44.043201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c37c000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c39d000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3be000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3df000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d87f000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d85e000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d83d000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d81c000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010474000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010495000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b6000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d7000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008308000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000082e7000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb51000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb30000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000965e000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000963d000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.043963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000961c000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.043981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.044002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000095fb000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.044021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.044042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000095da000 len:0x10000 key:0x183600 00:26:05.372 [2024-11-17 04:42:44.044061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.372 [2024-11-17 04:42:44.044083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000095b9000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009598000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009577000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009556000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009535000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009514000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000094f3000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000094d2000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000094b1000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009490000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000988f000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f037000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f016000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eff5000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.044654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000efd4000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.044673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:fa38 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.046957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf780 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf700 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf680 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf600 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f580 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8f500 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7f480 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6f400 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5f380 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4f300 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3f280 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2f200 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1f180 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0f100 len:0x10000 key:0x184200 00:26:05.373 [2024-11-17 04:42:44.047712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb9b000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.047754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbbc000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.047795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbdd000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.047837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfe000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.047878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc1f000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.047919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca72000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.047969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.373 [2024-11-17 04:42:44.047991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca93000 len:0x10000 key:0x183600 00:26:05.373 [2024-11-17 04:42:44.048011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cab4000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cad5000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000caf6000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009ee0000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009f01000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000086e6000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008707000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb7a000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb59000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecbc000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecdd000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8c1000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8a0000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008431000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008410000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009a9f000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009a7e000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009a5d000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009a3c000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009a1b000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000099fa000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000099d9000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000099b8000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.048971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.048993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009997000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009976000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ee48000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ee69000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ee8a000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eeab000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db13000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daf2000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dad1000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb17000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008683000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008662000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.374 [2024-11-17 04:42:44.049492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008641000 len:0x10000 key:0x183600 00:26:05.374 [2024-11-17 04:42:44.049511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.049532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008620000 len:0x10000 key:0x183600 00:26:05.375 [2024-11-17 04:42:44.049552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.049574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009ebf000 len:0x10000 key:0x183600 00:26:05.375 [2024-11-17 04:42:44.049594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.049616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e9e000 len:0x10000 key:0x183600 00:26:05.375 [2024-11-17 04:42:44.049634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.049657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e7d000 len:0x10000 key:0x183600 00:26:05.375 [2024-11-17 04:42:44.049676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.049698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e5c000 len:0x10000 key:0x183600 00:26:05.375 [2024-11-17 04:42:44.049717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 
cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.049739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b191000 len:0x10000 key:0x183600 00:26:05.375 [2024-11-17 04:42:44.049758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.049780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b170000 len:0x10000 key:0x183600 00:26:05.375 [2024-11-17 04:42:44.049802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10421 cdw0:66746000 sqhd:09e8 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.052525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.052556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:037a p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.052576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.052594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:037a p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.052614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.052632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:037a p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.052652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.052670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:0 sqhd:037a p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.055214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:05.375 [2024-11-17 04:42:44.055257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:05.375 [2024-11-17 04:42:44.055288] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
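The "CQ transport error -6" lines above are the host side of the failover test tearing down the fabrics qpair (the keyed SGL addresses mark this as an RDMA-style transport): -6 is -ENXIO, which spdk_nvme_qpair_process_completions() returns once the completion queue is unreachable; the driver then marks the controller failed and defers because a failover is already in progress. A minimal sketch of how a host-side poller can observe this state, assuming hypothetical function names (poll_io_qpair, handle_failover) that are not part of this test harness:

    #include <errno.h>
    #include "spdk/nvme.h"

    static void
    handle_failover(struct spdk_nvme_ctrlr *ctrlr)
    {
            (void)ctrlr; /* hypothetical: application-specific reconnect via an alternate path */
    }

    static void
    poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
            /* Reap completions; returns the number reaped or a negative errno. */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

            if (rc == -ENXIO) {
                    /* Transport-level failure, as in the "CQ transport error -6
                     * (No such device or address)" ERROR lines above. */
                    if (spdk_nvme_ctrlr_is_failed(ctrlr)) {
                            handle_failover(ctrlr);
                    }
            }
    }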
00:26:05.375 [2024-11-17 04:42:44.055336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:05.375 [2024-11-17 04:42:44.055369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:f970 p:0 m:0 dnr:0
00:26:05.375 [2024-11-17 04:42:44.055402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:05.375 [2024-11-17 04:42:44.055436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:f970 p:0 m:0 dnr:0
00:26:05.375 [2024-11-17 04:42:44.055468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:05.375 [2024-11-17 04:42:44.055498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:f970 p:0 m:0 dnr:0
00:26:05.375 [2024-11-17 04:42:44.055530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:26:05.375 [2024-11-17 04:42:44.055561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:f970 p:0 m:0 dnr:0
00:26:05.375 [2024-11-17 04:42:44.057899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:05.375 [2024-11-17 04:42:44.057955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:26:05.375 [2024-11-17 04:42:44.057986] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:26:05.375 [2024-11-17 04:42:44.058029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.058062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:79a2 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.058094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.058132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:79a2 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.058164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.058195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:79a2 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.058227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.058257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:79a2 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.060596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:05.375 [2024-11-17 04:42:44.060621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:05.375 [2024-11-17 04:42:44.060639] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:26:05.375 [2024-11-17 04:42:44.060666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.060685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:df28 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.060704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.060722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:df28 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.060742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.060760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:df28 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.060779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.060796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:df28 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.062916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:05.375 [2024-11-17 04:42:44.062971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:26:05.375 [2024-11-17 04:42:44.063002] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:26:05.375 [2024-11-17 04:42:44.063048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.063082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:78f2 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.063113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.063143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:78f2 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.063174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.063205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:78f2 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.063244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.375 [2024-11-17 04:42:44.063275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:78f2 p:0 m:0 dnr:0 00:26:05.375 [2024-11-17 04:42:44.065399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:05.375 [2024-11-17 04:42:44.065423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:05.375 [2024-11-17 04:42:44.065441] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
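
The repeated "Unable to perform failover, already in progress" notices come from bdev_nvme's reconnect/failover machinery: when a controller's queue pairs die, the module tries to fail the path over rather than immediately deleting the bdev, and concurrent triggers are coalesced. As a hedged sketch, one way such an RDMA controller is commonly attached so that bdev_nvme behaves this way (the bdev name, address, service ID and NQN below are placeholders, not values taken from this run):

    # Attach an NVMe-oF RDMA controller so bdev_nvme retries/fails over on
    # path loss instead of destroying the bdev; all values illustrative.
    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        -x failover
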
00:26:05.376 [2024-11-17 04:42:44.065466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.376 [2024-11-17 04:42:44.065485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:8c7e p:0 m:0 dnr:0 00:26:05.376 [2024-11-17 04:42:44.065504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.376 [2024-11-17 04:42:44.065522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:8c7e p:0 m:0 dnr:0 00:26:05.376 [2024-11-17 04:42:44.065541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.376 [2024-11-17 04:42:44.065559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:8c7e p:0 m:0 dnr:0 00:26:05.376 [2024-11-17 04:42:44.065578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.376 [2024-11-17 04:42:44.065595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10421 cdw0:1 sqhd:8c7e p:0 m:0 dnr:0 00:26:05.376 [2024-11-17 04:42:44.089393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:05.376 [2024-11-17 04:42:44.089413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:05.376 [2024-11-17 04:42:44.089423] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:26:05.376 [2024-11-17 04:42:44.089447] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:05.376 [2024-11-17 04:42:44.089457] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:05.376 [2024-11-17 04:42:44.089464] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688d5c0 00:26:05.376 [2024-11-17 04:42:44.097053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:05.376 [2024-11-17 04:42:44.097080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:05.376 [2024-11-17 04:42:44.097093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:26:05.376 [2024-11-17 04:42:44.097179] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:26:05.376 [2024-11-17 04:42:44.097195] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:26:05.376 [2024-11-17 04:42:44.097208] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
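
cnode4's reset attempt immediately hits "Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED": the target side is mid-shutdown, so the RDMA connection manager rejects the reconnect, SPDK maps the unexpected CM event to -74 (EBADMSG on Linux), and the queue-pair connect is marked failed. A quick way to see how widespread the rejects were in a saved run log (the log file name here is a placeholder):

    # Count failed RDMA queue-pair connects per rqpair pointer.
    grep -o 'Failed to connect rqpair=0x[0-9a-f]*' nvmf_shutdown.log |
        sort | uniq -c | sort -rn
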
00:26:05.376 [2024-11-17 04:42:44.097304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:26:05.376 [2024-11-17 04:42:44.097321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:05.376 [2024-11-17 04:42:44.097331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:05.376 task offset: 24576 on job bdev=Nvme10n1 fails
00:26:05.376
00:26:05.376 Latency(us)
00:26:05.376 [2024-11-17T03:42:44.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:05.376 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme1n1 ended in about 1.90 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme1n1 : 1.90 134.95 8.43 33.74 0.00 377099.39 59139.69 1067030.94
00:26:05.376 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme2n1 ended in about 1.90 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme2n1 : 1.90 134.89 8.43 33.72 0.00 374184.02 40265.32 1067030.94
00:26:05.376 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme3n1 ended in about 1.90 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme3n1 : 1.90 134.83 8.43 33.71 0.00 371325.99 44669.34 1067030.94
00:26:05.376 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme4n1 ended in about 1.90 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme4n1 : 1.90 151.61 9.48 33.69 0.00 334866.62 7916.75 1120718.03
00:26:05.376 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme5n1 ended in about 1.90 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme5n1 : 1.90 141.55 8.85 33.68 0.00 351686.25 13631.49 1114007.14
00:26:05.376 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme6n1 ended in about 1.90 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme6n1 : 1.90 147.80 9.24 33.66 0.00 336549.59 17511.22 1114007.14
00:26:05.376 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme7n1 ended in about 1.90 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme7n1 : 1.90 154.05 9.63 33.65 0.00 322677.20 4325.38 1107296.26
00:26:05.376 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme8n1 ended in about 1.90 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme8n1 : 1.90 149.25 9.33 33.63 0.00 328342.29 31037.85 1100585.37
00:26:05.376 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme9n1 ended in about 1.90 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme9n1 : 1.90 141.83 8.86 33.62 0.00 338850.16 34183.58 1093874.48
00:26:05.376 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:05.376 Job: Nvme10n1 ended in about 1.82 seconds with error
00:26:05.376 Verification LBA range: start 0x0 length 0x400
00:26:05.376 Nvme10n1 : 1.82 105.49 6.59 35.16 0.00 417420.08 74239.18 1060320.05
00:26:05.376 [2024-11-17T03:42:44.206Z] ===================================================================================================================
00:26:05.376 [2024-11-17T03:42:44.206Z] Total : 1396.24 87.27 338.26 0.00 353066.53 4325.38 1120718.03
00:26:05.376 [2024-11-17 04:42:44.123871] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:05.376 [2024-11-17 04:42:44.127417] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:05.376 [2024-11-17 04:42:44.127441] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:05.376 [2024-11-17 04:42:44.127452] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168d9c80
00:26:05.376 [2024-11-17 04:42:44.135508] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:05.376 [2024-11-17 04:42:44.135527] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:05.376 [2024-11-17 04:42:44.135536] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168d2900
00:26:05.376 [2024-11-17 04:42:44.135640] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:05.376 [2024-11-17 04:42:44.135651] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:05.376 [2024-11-17 04:42:44.135658] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168c6340
00:26:05.376 [2024-11-17 04:42:44.135742] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:05.376 [2024-11-17 04:42:44.135752] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:05.376 [2024-11-17 04:42:44.135760] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168c60c0
00:26:05.376 [2024-11-17 04:42:44.136362] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:05.376 [2024-11-17 04:42:44.136374] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:05.376 [2024-11-17 04:42:44.136380] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688ee00
00:26:05.376 [2024-11-17 04:42:44.136464] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:05.376 [2024-11-17 04:42:44.136474] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:05.376 [2024-11-17 04:42:44.136481] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688dd80
00:26:05.376 [2024-11-17 04:42:44.136561] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:05.376 [2024-11-17 04:42:44.136571] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA
connect error -74 00:26:05.376 [2024-11-17 04:42:44.136578] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168a8500 00:26:05.636 04:42:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 396749 00:26:05.636 04:42:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:26:05.636 04:42:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 396749 00:26:05.636 04:42:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:26:05.636 04:42:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:05.636 04:42:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:26:05.636 04:42:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:05.636 04:42:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 396749 00:26:06.205 [2024-11-17 04:42:44.997015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.205 [2024-11-17 04:42:44.997079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:06.205 [2024-11-17 04:42:44.999000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.205 [2024-11-17 04:42:44.999051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:06.205 [2024-11-17 04:42:44.999171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:06.205 [2024-11-17 04:42:44.999211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:06.205 [2024-11-17 04:42:44.999230] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:26:06.205 [2024-11-17 04:42:44.999251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:06.205 [2024-11-17 04:42:44.999275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:06.205 [2024-11-17 04:42:44.999292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:06.205 [2024-11-17 04:42:44.999310] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:26:06.205 [2024-11-17 04:42:44.999328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:26:06.464 [2024-11-17 04:42:45.093682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.464 [2024-11-17 04:42:45.093701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
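
The Latency(us) table above is standard bdevperf output: one job per attached Nvme bdev, queue depth 64, 64 KiB I/Os, verify workload, with Fail/s counting I/Os that completed in error per second and TO/s counting timeouts while the targets were being shut down; Average/min/max are per-I/O latencies in microseconds. A minimal re-run of the same shape of workload (a sketch: the binary path assumes a built SPDK tree, the runtime of 10 seconds is an arbitrary choice, and the config path is the bdevperf.conf the cleanup step later removes):

    # -q/-o/-w mirror the table header: depth 64, IO size 65536, verify.
    ./build/examples/bdevperf --json ./test/nvmf/target/bdevperf.conf \
        -q 64 -o 65536 -w verify -t 10
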
00:26:06.464 [2024-11-17 04:42:45.093734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:26:06.464 [2024-11-17 04:42:45.093744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:26:06.464 [2024-11-17 04:42:45.093753] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:26:06.464 [2024-11-17 04:42:45.093763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:26:06.464 [2024-11-17 04:42:45.131533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.464 [2024-11-17 04:42:45.131550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:06.464 [2024-11-17 04:42:45.131580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:06.464 [2024-11-17 04:42:45.131589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:06.464 [2024-11-17 04:42:45.131598] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:26:06.464 [2024-11-17 04:42:45.131607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:26:06.464 [2024-11-17 04:42:45.139544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.464 [2024-11-17 04:42:45.139560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:06.464 [2024-11-17 04:42:45.140791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.464 [2024-11-17 04:42:45.140804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:06.464 [2024-11-17 04:42:45.142166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.464 [2024-11-17 04:42:45.142179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:26:06.464 [2024-11-17 04:42:45.143418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.464 [2024-11-17 04:42:45.143432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:06.464 [2024-11-17 04:42:45.144641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.464 [2024-11-17 04:42:45.144653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
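
At this point every surviving controller is cycling through reconnect attempts and landing in the failed state, so bdev_nvme eventually gives up per its retry policy. A hedged sketch of the global knobs behind that policy (values are illustrative only, the long-option names follow scripts/rpc.py bdev_nvme_set_options, and in practice these normally must be set before the first controller is attached):

    # Retry/teardown policy for failed NVMe paths; illustrative values.
    ./scripts/rpc.py bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 30 \
        --reconnect-delay-sec 5 \
        --fast-io-fail-timeout-sec 10
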
00:26:06.464 [2024-11-17 04:42:45.145879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.464 [2024-11-17 04:42:45.145892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:26:06.464 [2024-11-17 04:42:45.145901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:06.464 [2024-11-17 04:42:45.145910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:06.464 [2024-11-17 04:42:45.145919] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:26:06.464 [2024-11-17 04:42:45.145929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:26:06.465 [2024-11-17 04:42:45.145945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:26:06.465 [2024-11-17 04:42:45.145954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:26:06.465 [2024-11-17 04:42:45.145962] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:26:06.465 [2024-11-17 04:42:45.145971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:26:06.465 [2024-11-17 04:42:45.145981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:26:06.465 [2024-11-17 04:42:45.145990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:26:06.465 [2024-11-17 04:42:45.145998] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:26:06.465 [2024-11-17 04:42:45.146008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:26:06.465 [2024-11-17 04:42:45.146079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:06.465 [2024-11-17 04:42:45.146090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:06.465 [2024-11-17 04:42:45.146098] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:26:06.465 [2024-11-17 04:42:45.146107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:26:06.465 [2024-11-17 04:42:45.146119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:06.465 [2024-11-17 04:42:45.146128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:06.465 [2024-11-17 04:42:45.146137] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:26:06.465 [2024-11-17 04:42:45.146146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
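
bdevperf was launched in the background earlier and is expected to die once every path is gone, so the harness wraps its wait in NOT: the wrapped command must fail for the test to pass. A simplified paraphrase of that idiom (the real helper in autotest_common.sh also validates that the argument is executable):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # expected failure
    }
    NOT wait "$bdevperf_pid"   # bdevperf exits non-zero during forced shutdown

The es=255, es=127, es=1 bookkeeping just below is the same helper collapsing whatever non-zero status wait returned into a simple boolean before asserting the inverted result.
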
00:26:06.465 [2024-11-17 04:42:45.146156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:26:06.465 [2024-11-17 04:42:45.146164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:26:06.465 [2024-11-17 04:42:45.146176] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:26:06.465 [2024-11-17 04:42:45.146185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:06.725 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:06.726 rmmod nvme_rdma 00:26:06.726 rmmod nvme_fabrics 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:06.726 04:42:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 396451 ']' 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 396451 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 396451 ']' 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 396451 00:26:06.726 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (396451) - No such process 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 396451 is not found' 00:26:06.726 Process with pid 396451 is not found 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:06.726 00:26:06.726 real 0m5.492s 00:26:06.726 user 0m15.833s 00:26:06.726 sys 0m1.398s 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:06.726 ************************************ 00:26:06.726 END TEST nvmf_shutdown_tc3 00:26:06.726 ************************************ 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:06.726 ************************************ 00:26:06.726 START TEST nvmf_shutdown_tc4 00:26:06.726 ************************************ 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.726 
04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.726 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:06.727 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:06.727 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:06.727 04:42:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:06.727 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:06.727 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:06.727 04:42:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:06.727 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
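
rdma_device_init has just loaded the whole kernel-side RDMA stack the tc4 run needs. The same set as a standalone helper (requires root; the module names are exactly those in the trace above):

    # Load the InfiniBand/RDMA core modules used by the harness.
    load_ib_rdma_modules() {
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod"
        done
    }
    load_ib_rdma_modules
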
00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:06.987 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:06.988 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:06.988 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:06.988 altname enp217s0f0np0 00:26:06.988 altname ens818f0np0 00:26:06.988 inet 192.168.100.8/24 scope global mlx_0_0 00:26:06.988 valid_lft forever preferred_lft forever 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:06.988 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:06.988 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:06.988 altname enp217s0f1np1 00:26:06.988 altname ens818f1np1 00:26:06.988 
inet 192.168.100.9/24 scope global mlx_0_1 00:26:06.988 valid_lft forever preferred_lft forever 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:06.988 04:42:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:06.988 192.168.100.9' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:06.988 192.168.100.9' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:06.988 192.168.100.9' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.988 04:42:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=397662 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 397662 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 397662 ']' 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.988 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.989 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.989 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.989 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:06.989 [2024-11-17 04:42:45.774427] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:06.989 [2024-11-17 04:42:45.774472] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.249 [2024-11-17 04:42:45.864190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:07.249 [2024-11-17 04:42:45.886679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.249 [2024-11-17 04:42:45.886717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.249 [2024-11-17 04:42:45.886726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.249 [2024-11-17 04:42:45.886734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.249 [2024-11-17 04:42:45.886741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
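The nvmf/common.sh trace above resolves the RDMA interface list (mlx_0_0, mlx_0_1) and pulls one IPv4 address per interface to build RDMA_IP_LIST, then splits it into NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 with head/tail. A minimal sketch of that address-extraction pipeline, assuming the interface names seen in the trace:

```bash
#!/usr/bin/env bash
# Sketch of the get_ip_address helper traced above (nvmf/common.sh@116-117):
# print an interface's first IPv4 address with the prefix length stripped.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Interface names taken from the trace; get_rdma_if_list's rxe device
# matching is skipped here for brevity.
for nic_name in mlx_0_0 mlx_0_1; do
    get_ip_address "$nic_name"   # prints 192.168.100.8, then 192.168.100.9
done
```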
00:26:07.249 [2024-11-17 04:42:45.888461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.249 [2024-11-17 04:42:45.888573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:07.249 [2024-11-17 04:42:45.888681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.249 [2024-11-17 04:42:45.888682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:07.249 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.249 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:26:07.249 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:07.249 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.249 04:42:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:07.249 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.249 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:07.249 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.249 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:07.249 [2024-11-17 04:42:46.053405] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e94f50/0x1e99400) succeed. 00:26:07.249 [2024-11-17 04:42:46.062704] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e96590/0x1edaaa0) succeed. 
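nvmfappstart launches nvmf_tgt with -m 0x1E, i.e. cores 1-4, which matches the four reactor notices above, and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. The helper body is not shown in this trace, so the loop below is only a plausible reconstruction: rpc_get_methods stands in for whatever probe the real waitforlisten uses, and the retry cap mirrors its max_retries=100 local.

```bash
#!/usr/bin/env bash
# Hypothetical waitforlisten: poll the UNIX-domain RPC socket until the
# freshly started nvmf_tgt answers, bailing out if the process dies first.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do                 # max_retries=100, as in the trace
    kill -0 "$nvmfpid" 2> /dev/null || exit 1   # target process died
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break                                   # RPC server is up and listening
    fi
    sleep 0.5
done
```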
00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.509 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:07.509 Malloc1 00:26:07.509 [2024-11-17 04:42:46.300689] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:07.509 Malloc2 00:26:07.769 Malloc3 00:26:07.769 Malloc4 00:26:07.769 Malloc5 00:26:07.769 Malloc6 00:26:07.769 Malloc7 00:26:07.769 Malloc8 00:26:08.028 Malloc9 00:26:08.028 Malloc10 00:26:08.028 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.028 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:08.028 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:08.028 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:08.028 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=397753 00:26:08.028 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:26:08.028 04:42:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:26:08.028 [2024-11-17 04:42:46.850608] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
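shutdown.sh assembles rpcs.txt from one heredoc per subsystem (the ten cat calls above) before starting the perf workload, which is why Malloc1 through Malloc10 and the listener on 192.168.100.8:4420 show up in the log. The heredoc body is not part of this trace, so the RPC batch below is a hypothetical reconstruction from standard SPDK RPCs; the transport options and the spdk_nvme_perf flags are copied verbatim from the trace, while the malloc bdev sizes are illustrative.

```bash
#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"

# Transport options as traced: -t rdma --num-shared-buffers 1024 -u 8192.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Hypothetical per-subsystem setup for cnode1..cnode10.
for i in {1..10}; do
    $rpc bdev_malloc_create -b "Malloc$i" 64 512   # 64 MiB, 512 B blocks (sizes assumed)
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done

# Perf flags verbatim from the trace: queue depth 128, 45056-byte random
# writes for 20 seconds against the RDMA listener. The target is then
# killed mid-run, so every outstanding write completes with an error.
"$SPDK/build/bin/spdk_nvme_perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 &
perfpid=$!
```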
00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 397662 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 397662 ']' 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 397662 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 397662 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 397662' 00:26:13.327 killing process with pid 397662 00:26:13.327 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 397662 00:26:13.328 04:42:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 397662 00:26:13.328 NVMe io qpair process completion error 00:26:13.328 NVMe io qpair process completion error 00:26:13.328 NVMe io qpair process completion error 00:26:13.328 NVMe io qpair process completion error 00:26:13.328 NVMe io qpair process completion error 00:26:13.328 starting I/O failed: -6 00:26:13.328 NVMe io qpair process completion error 00:26:13.328 NVMe io qpair process completion error 00:26:13.328 starting I/O failed: -6 00:26:13.587 04:42:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 
00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error 
(sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 [2024-11-17 04:42:52.930281] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error 
(sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.157 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed 
with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 [2024-11-17 04:42:52.941090] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed 
with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write 
completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.158 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 [2024-11-17 04:42:52.953461] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 
Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 
00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 starting I/O failed: -6 00:26:14.159 [2024-11-17 04:42:52.966226] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 
Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.159 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 
00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 [2024-11-17 04:42:52.979160] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 
00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.160 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8) 00:26:14.421 Write completed with error 
(sct=0, sc=8) 00:26:14.421 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:14.421 [2024-11-17 04:42:52.991719] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:14.422 [2024-11-17 04:42:53.004411] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:26:14.422 NVMe io qpair process completion error
00:26:14.422 NVMe io qpair process completion error
00:26:14.422 NVMe io qpair process completion error
00:26:14.681 04:42:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 397753
00:26:14.681 04:42:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:26:14.681 04:42:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 397753
00:26:14.681 04:42:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:26:14.681 04:42:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:14.681 04:42:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:26:14.681 04:42:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:14.681 04:42:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 397753
00:26:15.251 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.251 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.251 [2024-11-17 04:42:54.008137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.251 [2024-11-17 04:42:54.008201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.252 [2024-11-17 04:42:54.010680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.252 [2024-11-17 04:42:54.010726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.252 [2024-11-17 04:42:54.013248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.252 [2024-11-17 04:42:54.013290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.252 [2024-11-17 04:42:54.018052] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.252 [2024-11-17 04:42:54.020746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.252 [2024-11-17 04:42:54.020790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.252 [2024-11-17 04:42:54.022790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.252 [2024-11-17 04:42:54.022841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.253 [2024-11-17 04:42:54.025006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.253 [2024-11-17 04:42:54.025047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.253 [2024-11-17 04:42:54.027521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.253 [2024-11-17 04:42:54.027564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.253 [2024-11-17 04:42:54.033218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.253 [2024-11-17 04:42:54.033262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.253 [2024-11-17 04:42:54.035179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.253 [2024-11-17 04:42:54.035222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[... repeated "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:26:15.254 [2024-11-17 04:42:54.076201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:15.254 [2024-11-17 04:42:54.076229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:15.254 Initializing NVMe Controllers
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.254 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:26:15.254 Controller IO queue size 128, less than required.
00:26:15.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
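The queue-size advisory above means the benchmark asked for a deeper queue than the target's 128-entry IO queues, so the surplus requests wait in the host driver. As an illustration only: the flags below are spdk_nvme_perf's usual options (-q queue depth, -o IO size, -w workload, -t seconds, -r transport ID), but the values and the exact command line are assumptions, not what this run actually used.

```bash
# Hypothetical invocation sized to the controller: keep -q at or below the
# controller's 128-entry IO queue so no request queues inside the driver.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w write -t 10 \
    -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```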
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:15.254 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:15.254 Initialization complete. Launching workers.
00:26:15.254 ========================================================
00:26:15.254 Latency(us)
00:26:15.254 Device Information : IOPS MiB/s Average min max
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1548.26 66.53 81528.49 114.31 1187020.31
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1543.58 66.33 81871.39 115.51 1199644.46
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1537.05 66.05 82320.47 114.18 1226068.78
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1565.00 67.25 95349.19 104.07 2218645.05
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1548.60 66.54 81773.53 110.40 1223305.46
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1550.27 66.61 81807.96 113.21 1213842.31
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1549.10 66.56 81765.69 108.10 1224829.97
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1567.51 67.35 95146.49 114.39 2079353.68
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1516.29 65.15 83842.75 114.00 1276191.51
00:26:15.254 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1559.31 67.00 95709.93 114.12 2153089.63
00:26:15.254 ========================================================
00:26:15.254 Total : 15484.98 665.37 86147.48 104.07 2218645.05
00:26:15.254
00:26:15.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
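The `NOT wait 397753` trace that brackets the perf run is the harness asserting that the benchmark process exits nonzero once the target is torn down (it ends above with `es=1` and `(( !es == 0 ))` succeeding). A minimal sketch of that helper, reconstructed from the traced lines; the real autotest_common.sh version also validates the argument via valid_exec_arg and treats signal exits specially, which is elided here:

```bash
NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command, capture its exit status
    # traced as: (( es > 128 )) -- codes above 128 mean death by signal,
    # which the real helper handles as a separate case (elided)
    (( !es == 0 ))   # succeed only if the wrapped command failed
}

NOT wait 397753      # passes here because spdk_nvme_perf (pid 397753) failed
```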
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 397662 ']'
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 397662
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 397662 ']'
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 397662
00:26:15.515 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (397662) - No such process
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 397662 is not found'
00:26:15.515 Process with pid 397662 is not found
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:26:15.515
00:26:15.515 real 0m8.721s
00:26:15.515 user 0m32.072s
00:26:15.515 sys 0m1.435s
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:15.515 ************************************
00:26:15.515 END TEST nvmf_shutdown_tc4
00:26:15.515 ************************************
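The nvmfcleanup trace above unloads the initiator modules under `set +e` inside a retry loop, since module references drain asynchronously after the qpairs die. A condensed sketch of that pattern; the backoff sleep is an assumption, as the traced code only shows the loop and the modprobe calls:

```bash
set +e                            # a still-busy module must not abort the run
for i in {1..20}; do
    # -r removes the module (and, with -v, logs the rmmod calls seen above)
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1                       # assumed backoff between attempts
done
set -e
```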
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:26:15.515
00:26:15.515 real 0m33.136s
00:26:15.515 user 1m36.254s
00:26:15.515 sys 0m11.019s
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:15.515 ************************************
00:26:15.515 END TEST nvmf_shutdown
00:26:15.515 ************************************
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:15.515 ************************************
00:26:15.515 START TEST nvmf_nsid
00:26:15.515 ************************************
00:26:15.515 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:26:15.775 * Looking for test storage...
00:26:15.775 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid --
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:15.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.775 --rc genhtml_branch_coverage=1 00:26:15.775 --rc genhtml_function_coverage=1 00:26:15.775 --rc genhtml_legend=1 00:26:15.775 --rc geninfo_all_blocks=1 00:26:15.775 --rc geninfo_unexecuted_blocks=1 00:26:15.775 00:26:15.775 ' 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:15.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.775 --rc genhtml_branch_coverage=1 00:26:15.775 --rc genhtml_function_coverage=1 00:26:15.775 --rc genhtml_legend=1 00:26:15.775 --rc geninfo_all_blocks=1 00:26:15.775 --rc geninfo_unexecuted_blocks=1 00:26:15.775 00:26:15.775 ' 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:15.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.775 --rc genhtml_branch_coverage=1 00:26:15.775 --rc genhtml_function_coverage=1 00:26:15.775 --rc genhtml_legend=1 00:26:15.775 --rc geninfo_all_blocks=1 00:26:15.775 --rc geninfo_unexecuted_blocks=1 00:26:15.775 00:26:15.775 ' 00:26:15.775 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:15.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.776 --rc genhtml_branch_coverage=1 00:26:15.776 --rc genhtml_function_coverage=1 00:26:15.776 --rc genhtml_legend=1 00:26:15.776 --rc geninfo_all_blocks=1 00:26:15.776 --rc geninfo_unexecuted_blocks=1 00:26:15.776 00:26:15.776 ' 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:15.776 04:42:54 
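The `lt 1.15 2` trace above is scripts/common.sh comparing the installed lcov version against 2 field by field, after splitting each version string on `.`, `-`, and `:`. A simplified sketch of that comparison; numeric-only fields are assumed, and the real cmp_versions also validates each field and supports the other comparison operators:

```bash
lt() {
    local IFS=.-: i
    local -a ver1 ver2
    read -ra ver1 <<< "$1"    # e.g. 1.15 -> (1 15)
    read -ra ver2 <<< "$2"    # e.g. 2    -> (2)
    for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # strictly smaller
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}

lt 1.15 2 && echo "lcov is older than 2.x"   # matches the traced result
```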
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.776 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.776 04:42:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.903 04:43:01 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:23.903 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:23.903 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:23.904 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:23.904 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:23.904 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:23.904 04:43:01 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
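Annotation: allocate_nic_ips pairs each RDMA netdev with an address, and get_ip_address (common.sh@116-117, traced above) is nothing more than an iproute2 pipeline. The same extraction as a self-contained function; the interface name and expected output are taken from this run:

  # Print the first IPv4 address of an interface, stripped of its /prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this host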
00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:23.904 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:23.904 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:23.904 altname enp217s0f0np0 00:26:23.904 altname ens818f0np0 00:26:23.904 inet 192.168.100.8/24 scope global mlx_0_0 00:26:23.904 valid_lft forever preferred_lft forever 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:23.904 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:23.904 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:23.904 altname enp217s0f1np1 00:26:23.904 altname ens818f1np1 00:26:23.904 inet 192.168.100.9/24 scope global mlx_0_1 00:26:23.904 valid_lft forever preferred_lft forever 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:23.904 
04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:26:23.904 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:23.905 192.168.100.9' 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:23.905 192.168.100.9' 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:23.905 192.168.100.9' 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:23.905 04:43:01 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=402370 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 402370 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 402370 ']' 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.905 04:43:01 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.905 [2024-11-17 04:43:01.801258] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:23.905 [2024-11-17 04:43:01.801321] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.905 [2024-11-17 04:43:01.891404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.905 [2024-11-17 04:43:01.912791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.905 [2024-11-17 04:43:01.912828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.905 [2024-11-17 04:43:01.912837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.905 [2024-11-17 04:43:01.912845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.905 [2024-11-17 04:43:01.912852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
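Annotation: nvmfappstart backgrounds nvmf_tgt and then blocks in waitforlisten until the RPC socket answers, which is why the 'Waiting for process to start up and listen on UNIX domain socket' line appears before the EAL banner. A hedged sketch of that start-and-wait pattern — spdk_get_version is a standard SPDK RPC, but the retry budget and the /path/to/spdk placeholder are illustrative, not the harness's exact code:

  SPDK=/path/to/spdk                      # assumption: adjust to your checkout
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 1 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # The socket answers RPCs only once the app has finished initializing.
      if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; then
          break
      fi
      # Bail out early if the target crashed instead of merely being slow.
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done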
00:26:23.905 [2024-11-17 04:43:01.913436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=402507 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=1e59fd0b-34b5-4693-a7f2-6b9c84f11ecb 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=d7675103-6cad-42a8-864d-8aad086f8af3 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2762d2a8-7b85-4a56-b778-04d4755ca6e7 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.905 04:43:02 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.905 null0 00:26:23.905 [2024-11-17 04:43:02.107085] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:23.905 [2024-11-17 04:43:02.107134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402507 ] 00:26:23.905 null1 00:26:23.905 null2 00:26:23.905 [2024-11-17 04:43:02.141130] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b40f40/0x1a1d8b0) succeed. 00:26:23.905 [2024-11-17 04:43:02.149744] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b423a0/0x1a5ef50) succeed. 00:26:23.905 [2024-11-17 04:43:02.199077] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:23.905 [2024-11-17 04:43:02.200059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.905 [2024-11-17 04:43:02.222981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 402507 /var/tmp/tgt2.sock 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 402507 ']' 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:26:23.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:23.905 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:24.165 [2024-11-17 04:43:02.768515] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x250c720/0x251be30) succeed. 00:26:24.165 [2024-11-17 04:43:02.779271] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x28a29c0/0x255d4d0) succeed. 
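Annotation: the rpc_cmd call at target/nsid.sh@63 feeds a batch of RPCs to tgt2 that the xtrace does not echo; its visible effects are the null0/null1/null2 bdevs, subsystem nqn.2024-10.io.spdk:cnode2, and the second RDMA listener on port 4421. The batch would look roughly like the following — the RPC names are real SPDK RPCs, but the exact sizes and flags nsid.sh passes are an assumption:

  rpc="/path/to/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"
  $rpc nvmf_create_transport -t rdma
  $rpc bdev_null_create null0 100 4096                 # name, size in MiB, block size
  $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 \
      -u 1e59fd0b-34b5-4693-a7f2-6b9c84f11ecb          # ns1uuid generated above
  $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 \
      -t rdma -a 192.168.100.8 -s 4421

uuid2nguid, checked after the connect below, then just strips the dashes (the 'tr -d -' in the trace) and upcases, which is why 1e59fd0b-34b5-... compares equal to 1E59FD0B34B5... farther down.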
00:26:24.165 [2024-11-17 04:43:02.821546] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:26:24.165 nvme0n1 nvme0n2 00:26:24.165 nvme1n1 00:26:24.165 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:24.165 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:24.165 04:43:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 1e59fd0b-34b5-4693-a7f2-6b9c84f11ecb 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1e59fd0b34b54693a7f26b9c84f11ecb 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1E59FD0B34B54693A7F26B9C84F11ECB 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 1E59FD0B34B54693A7F26B9C84F11ECB == \1\E\5\9\F\D\0\B\3\4\B\5\4\6\9\3\A\7\F\2\6\B\9\C\8\4\F\1\1\E\C\B ]] 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:32.287 04:43:09 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid d7675103-6cad-42a8-864d-8aad086f8af3 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d76751036cad42a8864d8aad086f8af3 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D76751036CAD42A8864D8AAD086F8AF3 00:26:32.287 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ D76751036CAD42A8864D8AAD086F8AF3 == \D\7\6\7\5\1\0\3\6\C\A\D\4\2\A\8\8\6\4\D\8\A\A\D\0\8\6\F\8\A\F\3 ]] 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2762d2a8-7b85-4a56-b778-04d4755ca6e7 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2762d2a87b854a56b77804d4755ca6e7 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2762D2A87B854A56B77804D4755CA6E7 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2762D2A87B854A56B77804D4755CA6E7 == 
\2\7\6\2\D\2\A\8\7\B\8\5\4\A\5\6\B\7\7\8\0\4\D\4\7\5\5\C\A\6\E\7 ]] 00:26:32.288 04:43:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 402507 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 402507 ']' 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 402507 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402507 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402507' 00:26:38.858 killing process with pid 402507 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 402507 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 402507 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:38.858 rmmod nvme_rdma 00:26:38.858 rmmod nvme_fabrics 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 402370 ']' 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 402370 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 402370 ']' 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 402370 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402370 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402370' 00:26:38.858 killing process with pid 402370 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 402370 00:26:38.858 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 402370 00:26:39.118 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.118 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:39.118 00:26:39.118 real 0m23.448s 00:26:39.118 user 0m33.205s 00:26:39.118 sys 0m6.821s 00:26:39.118 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.118 04:43:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:39.118 ************************************ 00:26:39.118 END TEST nvmf_nsid 00:26:39.118 ************************************ 00:26:39.118 04:43:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:39.118 00:26:39.118 real 15m55.336s 00:26:39.118 user 47m55.237s 00:26:39.118 sys 3m25.359s 00:26:39.118 04:43:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.118 04:43:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:39.118 ************************************ 00:26:39.118 END TEST nvmf_target_extra 00:26:39.118 ************************************ 00:26:39.118 04:43:17 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:39.118 04:43:17 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.118 04:43:17 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.118 04:43:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:39.118 ************************************ 00:26:39.118 START TEST nvmf_host 00:26:39.118 ************************************ 00:26:39.118 04:43:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:39.377 * Looking for test storage... 
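Annotation: killprocess, used above on both targets (pids 402507 and 402370), is a guarded teardown rather than a bare kill: it checks the pid is still alive, reads the command name so it never signals a sudo wrapper, then kills and reaps. The trace shows each check; the function body below is a reconstruction of that sequence, not a verbatim copy of autotest_common.sh:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone: nothing to do
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= -p "$pid")  # e.g. reactor_0, reactor_1
          [[ $name == sudo ]] && return 1             # refuse to signal a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                 # reap and ignore the exit status
  }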
00:26:39.377 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:26:39.377 04:43:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:39.377 04:43:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:39.377 04:43:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:39.377 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:39.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.378 --rc genhtml_branch_coverage=1 00:26:39.378 --rc genhtml_function_coverage=1 00:26:39.378 --rc genhtml_legend=1 00:26:39.378 --rc geninfo_all_blocks=1 00:26:39.378 --rc geninfo_unexecuted_blocks=1 00:26:39.378 00:26:39.378 ' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:26:39.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.378 --rc genhtml_branch_coverage=1 00:26:39.378 --rc genhtml_function_coverage=1 00:26:39.378 --rc genhtml_legend=1 00:26:39.378 --rc geninfo_all_blocks=1 00:26:39.378 --rc geninfo_unexecuted_blocks=1 00:26:39.378 00:26:39.378 ' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:39.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.378 --rc genhtml_branch_coverage=1 00:26:39.378 --rc genhtml_function_coverage=1 00:26:39.378 --rc genhtml_legend=1 00:26:39.378 --rc geninfo_all_blocks=1 00:26:39.378 --rc geninfo_unexecuted_blocks=1 00:26:39.378 00:26:39.378 ' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:39.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.378 --rc genhtml_branch_coverage=1 00:26:39.378 --rc genhtml_function_coverage=1 00:26:39.378 --rc genhtml_legend=1 00:26:39.378 --rc geninfo_all_blocks=1 00:26:39.378 --rc geninfo_unexecuted_blocks=1 00:26:39.378 00:26:39.378 ' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.378 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.378 ************************************ 00:26:39.378 START TEST nvmf_multicontroller 00:26:39.378 ************************************ 00:26:39.378 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:39.638 * Looking for test storage... 00:26:39.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.638 --rc genhtml_branch_coverage=1 00:26:39.638 --rc genhtml_function_coverage=1 00:26:39.638 --rc genhtml_legend=1 00:26:39.638 --rc geninfo_all_blocks=1 00:26:39.638 --rc geninfo_unexecuted_blocks=1 00:26:39.638 00:26:39.638 ' 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.638 --rc genhtml_branch_coverage=1 00:26:39.638 --rc genhtml_function_coverage=1 00:26:39.638 --rc genhtml_legend=1 00:26:39.638 --rc geninfo_all_blocks=1 00:26:39.638 --rc geninfo_unexecuted_blocks=1 00:26:39.638 00:26:39.638 ' 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.638 --rc genhtml_branch_coverage=1 00:26:39.638 --rc genhtml_function_coverage=1 00:26:39.638 --rc genhtml_legend=1 00:26:39.638 --rc geninfo_all_blocks=1 00:26:39.638 --rc geninfo_unexecuted_blocks=1 00:26:39.638 00:26:39.638 ' 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.638 --rc genhtml_branch_coverage=1 00:26:39.638 --rc genhtml_function_coverage=1 00:26:39.638 --rc genhtml_legend=1 00:26:39.638 --rc geninfo_all_blocks=1 00:26:39.638 --rc geninfo_unexecuted_blocks=1 00:26:39.638 00:26:39.638 ' 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.638 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
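Annotation: the scripts/common.sh block replayed above (once per test binary) is lcov version detection: 'lt 1.15 2' splits both strings on '.', '-' and ':' into arrays and compares them component-wise to decide which coverage flags the installed lcov accepts. The same logic condensed into one function — simplified, since the real cmp_versions also pads arrays of unequal length, handled here by the final length check:

  # Succeed when dotted version $1 sorts strictly before $2.
  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < ${#ver1[@]} && v < ${#ver2[@]}; v++)); do
          ((ver1[v] > ver2[v])) && return 1
          ((ver1[v] < ver2[v])) && return 0
      done
      ((${#ver1[@]} < ${#ver2[@]}))   # equal prefix: the shorter version is older
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"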
00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.639 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:39.639 04:43:18 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:26:39.639 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:26:39.639 00:26:39.639 real 0m0.235s 00:26:39.639 user 0m0.122s 00:26:39.639 sys 0m0.131s 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.639 ************************************ 00:26:39.639 END TEST nvmf_multicontroller 00:26:39.639 ************************************ 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.639 04:43:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.899 ************************************ 00:26:39.899 START TEST nvmf_aer 00:26:39.899 ************************************ 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:39.899 * Looking for test storage... 
00:26:39.899 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:39.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.899 --rc genhtml_branch_coverage=1 00:26:39.899 --rc genhtml_function_coverage=1 00:26:39.899 --rc genhtml_legend=1 00:26:39.899 --rc geninfo_all_blocks=1 00:26:39.899 --rc geninfo_unexecuted_blocks=1 00:26:39.899 00:26:39.899 ' 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:39.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.899 --rc genhtml_branch_coverage=1 00:26:39.899 --rc genhtml_function_coverage=1 00:26:39.899 --rc genhtml_legend=1 00:26:39.899 --rc geninfo_all_blocks=1 00:26:39.899 --rc geninfo_unexecuted_blocks=1 00:26:39.899 00:26:39.899 ' 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:39.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.899 --rc genhtml_branch_coverage=1 00:26:39.899 --rc genhtml_function_coverage=1 00:26:39.899 --rc genhtml_legend=1 00:26:39.899 --rc geninfo_all_blocks=1 00:26:39.899 --rc geninfo_unexecuted_blocks=1 00:26:39.899 00:26:39.899 ' 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:39.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.899 --rc genhtml_branch_coverage=1 00:26:39.899 --rc genhtml_function_coverage=1 00:26:39.899 --rc genhtml_legend=1 00:26:39.899 --rc geninfo_all_blocks=1 00:26:39.899 --rc geninfo_unexecuted_blocks=1 00:26:39.899 00:26:39.899 ' 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.899 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.900 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:39.900 04:43:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:48.027 04:43:25 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:48.027 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:48.027 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:48.027 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.027 
04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:48.027 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.027 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.027 04:43:25 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:48.028 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:48.028 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:48.028 altname enp217s0f0np0 00:26:48.028 altname ens818f0np0 00:26:48.028 inet 192.168.100.8/24 scope global mlx_0_0 00:26:48.028 valid_lft forever preferred_lft forever 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:48.028 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:48.028 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:48.028 altname enp217s0f1np1 00:26:48.028 altname ens818f1np1 00:26:48.028 inet 192.168.100.9/24 scope global mlx_0_1 00:26:48.028 valid_lft forever preferred_lft forever 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:48.028 192.168.100.9' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:48.028 192.168.100.9' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:48.028 192.168.100.9' 
00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=408550 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 408550 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 408550 ']' 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.028 04:43:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.028 [2024-11-17 04:43:25.974555] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:48.028 [2024-11-17 04:43:25.974605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.028 [2024-11-17 04:43:26.066376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.028 [2024-11-17 04:43:26.089904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.028 [2024-11-17 04:43:26.089962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.028 [2024-11-17 04:43:26.089972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.028 [2024-11-17 04:43:26.089980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:48.028 [2024-11-17 04:43:26.090004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.028 [2024-11-17 04:43:26.091679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.028 [2024-11-17 04:43:26.091790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.028 [2024-11-17 04:43:26.091901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.028 [2024-11-17 04:43:26.091903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.028 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.028 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:48.028 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.028 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 [2024-11-17 04:43:26.259445] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9b6c50/0x9bb100) succeed. 00:26:48.029 [2024-11-17 04:43:26.269571] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9b8290/0x9fc7a0) succeed. 
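With the RDMA transport created and both mlx5 IB devices registered, the records that follow build the test target: a 64 MB, 512 B-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, max 2 namespaces), namespace 1, and an RDMA listener on 192.168.100.8:4420. rpc_cmd in the harness forwards its arguments to scripts/rpc.py, so the same bring-up can be replayed by hand roughly as follows — a sketch assuming a running nvmf_tgt on the default RPC socket; every flag is copied from the trace:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $SPDK/scripts/rpc.py nvmf_get_subsystems    # returns the JSON dumped in the log below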
00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 Malloc0 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 [2024-11-17 04:43:26.442373] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 [ 00:26:48.029 { 00:26:48.029 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:48.029 "subtype": "Discovery", 00:26:48.029 "listen_addresses": [], 00:26:48.029 "allow_any_host": true, 00:26:48.029 "hosts": [] 00:26:48.029 }, 00:26:48.029 { 00:26:48.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.029 "subtype": "NVMe", 00:26:48.029 "listen_addresses": [ 00:26:48.029 { 00:26:48.029 "trtype": "RDMA", 00:26:48.029 "adrfam": "IPv4", 00:26:48.029 "traddr": "192.168.100.8", 00:26:48.029 "trsvcid": "4420" 00:26:48.029 } 00:26:48.029 ], 00:26:48.029 "allow_any_host": true, 00:26:48.029 "hosts": [], 00:26:48.029 "serial_number": "SPDK00000000000001", 00:26:48.029 "model_number": "SPDK bdev Controller", 00:26:48.029 "max_namespaces": 2, 00:26:48.029 "min_cntlid": 1, 00:26:48.029 "max_cntlid": 65519, 00:26:48.029 "namespaces": [ 00:26:48.029 { 00:26:48.029 "nsid": 1, 00:26:48.029 "bdev_name": "Malloc0", 00:26:48.029 "name": "Malloc0", 00:26:48.029 "nguid": "92F055382410412DA8EA0C4746AA747B", 00:26:48.029 "uuid": "92f05538-2410-412d-a8ea-0c4746aa747b" 00:26:48.029 } 00:26:48.029 ] 00:26:48.029 } 00:26:48.029 ] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=408795 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 Malloc1 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 [ 00:26:48.029 { 00:26:48.029 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:48.029 "subtype": "Discovery", 00:26:48.029 "listen_addresses": [], 00:26:48.029 "allow_any_host": true, 00:26:48.029 "hosts": [] 00:26:48.029 }, 00:26:48.029 { 00:26:48.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.029 "subtype": "NVMe", 00:26:48.029 "listen_addresses": [ 00:26:48.029 { 00:26:48.029 "trtype": "RDMA", 00:26:48.029 "adrfam": "IPv4", 00:26:48.029 "traddr": "192.168.100.8", 00:26:48.029 "trsvcid": "4420" 00:26:48.029 } 00:26:48.029 ], 00:26:48.029 "allow_any_host": true, 00:26:48.029 "hosts": [], 00:26:48.029 "serial_number": "SPDK00000000000001", 00:26:48.029 "model_number": "SPDK bdev Controller", 00:26:48.029 "max_namespaces": 2, 00:26:48.029 "min_cntlid": 1, 00:26:48.029 "max_cntlid": 65519, 00:26:48.029 "namespaces": [ 00:26:48.029 { 00:26:48.029 "nsid": 1, 00:26:48.029 "bdev_name": "Malloc0", 00:26:48.029 "name": "Malloc0", 00:26:48.029 "nguid": "92F055382410412DA8EA0C4746AA747B", 00:26:48.029 "uuid": "92f05538-2410-412d-a8ea-0c4746aa747b" 00:26:48.029 }, 00:26:48.029 { 00:26:48.029 "nsid": 2, 00:26:48.029 "bdev_name": "Malloc1", 00:26:48.029 "name": "Malloc1", 00:26:48.029 "nguid": "B9218BE0E3154017AD2AB8A235E17C81", 00:26:48.029 "uuid": "b9218be0-e315-4017-ad2a-b8a235e17c81" 00:26:48.029 } 00:26:48.029 ] 00:26:48.029 } 00:26:48.029 ] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 408795 00:26:48.029 Asynchronous Event Request test 00:26:48.029 Attaching to 192.168.100.8 00:26:48.029 Attached to 192.168.100.8 00:26:48.029 Registering asynchronous event callbacks... 00:26:48.029 Starting namespace attribute notice tests for all controllers... 00:26:48.029 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:48.029 aer_cb - Changed Namespace 00:26:48.029 Cleaning up... 
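In the stretch above, the aer binary connects to the 192.168.100.8:4420 listener (-n 2, presumably the expected namespace count) and the -t touch file serves as a ready signal: the harness blocks in waitforfile — the "'[' ... -lt 200 ']'" / "sleep 0.1" records — before hot-adding Malloc1 as namespace 2, which produces the "Changed Namespace" AEN logged just above. A minimal sketch of that polling idiom, simplified from the autotest_common.sh behavior visible in the trace (200 iterations at 0.1 s, i.e. roughly a 20 s budget):

    # waitforfile_sketch FILE -> returns 0 once FILE exists, nonzero after ~20 s
    waitforfile_sketch() {
        local file=$1 i=0
        while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$file" ]    # success only if the file actually appeared
    }

In this run the counter only reached i=2 before the file existed, so the tool signaled readiness within a couple of poll intervals; only then was the second namespace added to trigger the event.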
00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:48.029 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.030 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:48.289 rmmod nvme_rdma 00:26:48.289 rmmod nvme_fabrics 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 408550 ']' 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 408550 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 408550 ']' 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 408550 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 408550 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 408550' 00:26:48.289 killing process with pid 
408550 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 408550 00:26:48.289 04:43:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 408550 00:26:48.549 04:43:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:48.549 04:43:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:48.549 00:26:48.549 real 0m8.726s 00:26:48.549 user 0m6.357s 00:26:48.549 sys 0m6.070s 00:26:48.549 04:43:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.549 04:43:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.549 ************************************ 00:26:48.549 END TEST nvmf_aer 00:26:48.549 ************************************ 00:26:48.549 04:43:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:48.549 04:43:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:48.549 04:43:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:48.549 04:43:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.549 ************************************ 00:26:48.549 START TEST nvmf_async_init 00:26:48.549 ************************************ 00:26:48.549 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:48.809 * Looking for test storage... 00:26:48.809 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:48.809 04:43:27 
nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:48.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.809 --rc genhtml_branch_coverage=1 00:26:48.809 --rc genhtml_function_coverage=1 00:26:48.809 --rc genhtml_legend=1 00:26:48.809 --rc geninfo_all_blocks=1 00:26:48.809 --rc geninfo_unexecuted_blocks=1 00:26:48.809 00:26:48.809 ' 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:48.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.809 --rc genhtml_branch_coverage=1 00:26:48.809 --rc genhtml_function_coverage=1 00:26:48.809 --rc genhtml_legend=1 00:26:48.809 --rc geninfo_all_blocks=1 00:26:48.809 --rc geninfo_unexecuted_blocks=1 00:26:48.809 00:26:48.809 ' 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:48.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.809 --rc genhtml_branch_coverage=1 00:26:48.809 --rc genhtml_function_coverage=1 00:26:48.809 --rc genhtml_legend=1 00:26:48.809 --rc geninfo_all_blocks=1 00:26:48.809 --rc geninfo_unexecuted_blocks=1 00:26:48.809 00:26:48.809 ' 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:48.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.809 --rc genhtml_branch_coverage=1 00:26:48.809 --rc genhtml_function_coverage=1 00:26:48.809 --rc genhtml_legend=1 00:26:48.809 --rc geninfo_all_blocks=1 00:26:48.809 --rc geninfo_unexecuted_blocks=1 00:26:48.809 00:26:48.809 ' 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.809 
04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.809 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:48.810 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8b0e3b64d6484e41bcf34f3388fa673f 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.810 04:43:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:56.939 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:56.939 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:56.939 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:56.939 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:56.939 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:56.940 04:43:34 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:56.940 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:56.940 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:56.940 altname enp217s0f0np0 00:26:56.940 altname ens818f0np0 00:26:56.940 inet 192.168.100.8/24 scope global mlx_0_0 00:26:56.940 valid_lft forever preferred_lft forever 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:56.940 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:56.940 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:56.940 altname enp217s0f1np1 00:26:56.940 altname ens818f1np1 00:26:56.940 inet 192.168.100.9/24 scope global mlx_0_1 00:26:56.940 valid_lft forever preferred_lft forever 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:56.940 192.168.100.9' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:56.940 192.168.100.9' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:56.940 192.168.100.9' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=412228 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 412228 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 412228 ']' 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.940 04:43:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.940 [2024-11-17 04:43:34.854773] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:56.940 [2024-11-17 04:43:34.854820] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.940 [2024-11-17 04:43:34.944926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.940 [2024-11-17 04:43:34.966453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.940 [2024-11-17 04:43:34.966491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.940 [2024-11-17 04:43:34.966501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.940 [2024-11-17 04:43:34.966509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.940 [2024-11-17 04:43:34.966519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
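For orientation, the host/async_init.sh steps traced below amount to the following RPC sequence, replayable by hand against a running nvmf_tgt — a minimal sketch assuming the harness's rpc_cmd resolves to scripts/rpc.py talking to the /var/tmp/spdk.sock socket named in this log (every command and value is copied from the trace below; the workspace path is this run's and not portable):
# sketch only: rpc_cmd in the trace is assumed to wrap scripts/rpc.py
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
$rpc -s $sock nvmf_create_transport -t rdma --num-shared-buffers 1024
$rpc -s $sock bdev_null_create null0 1024 512    # 1024 MiB null bdev, 512 B blocks -> 2097152 blocks
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8b0e3b64d6484e41bcf34f3388fa673f
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc -s $sock bdev_get_bdevs -b nvme0n1          # expect the "NVMe disk" JSON dumps shown below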
00:26:56.940 [2024-11-17 04:43:34.967118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.940 [2024-11-17 04:43:35.137836] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2389c50/0x238e100) succeed. 00:26:56.940 [2024-11-17 04:43:35.146726] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x238b0b0/0x23cf7a0) succeed. 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.940 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.941 null0 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8b0e3b64d6484e41bcf34f3388fa673f 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.941 [2024-11-17 04:43:35.224913] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.941 nvme0n1 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.941 [ 00:26:56.941 { 00:26:56.941 "name": "nvme0n1", 00:26:56.941 "aliases": [ 00:26:56.941 "8b0e3b64-d648-4e41-bcf3-4f3388fa673f" 00:26:56.941 ], 00:26:56.941 "product_name": "NVMe disk", 00:26:56.941 "block_size": 512, 00:26:56.941 "num_blocks": 2097152, 00:26:56.941 "uuid": "8b0e3b64-d648-4e41-bcf3-4f3388fa673f", 00:26:56.941 "numa_id": 1, 00:26:56.941 "assigned_rate_limits": { 00:26:56.941 "rw_ios_per_sec": 0, 00:26:56.941 "rw_mbytes_per_sec": 0, 00:26:56.941 "r_mbytes_per_sec": 0, 00:26:56.941 "w_mbytes_per_sec": 0 00:26:56.941 }, 00:26:56.941 "claimed": false, 00:26:56.941 "zoned": false, 00:26:56.941 "supported_io_types": { 00:26:56.941 "read": true, 00:26:56.941 "write": true, 00:26:56.941 "unmap": false, 00:26:56.941 "flush": true, 00:26:56.941 "reset": true, 00:26:56.941 "nvme_admin": true, 00:26:56.941 "nvme_io": true, 00:26:56.941 "nvme_io_md": false, 00:26:56.941 "write_zeroes": true, 00:26:56.941 "zcopy": false, 00:26:56.941 "get_zone_info": false, 00:26:56.941 "zone_management": false, 00:26:56.941 "zone_append": false, 00:26:56.941 "compare": true, 00:26:56.941 "compare_and_write": true, 00:26:56.941 "abort": true, 00:26:56.941 "seek_hole": false, 00:26:56.941 "seek_data": false, 00:26:56.941 "copy": true, 00:26:56.941 "nvme_iov_md": false 00:26:56.941 }, 00:26:56.941 "memory_domains": [ 00:26:56.941 { 00:26:56.941 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:56.941 "dma_device_type": 0 00:26:56.941 } 00:26:56.941 ], 00:26:56.941 "driver_specific": { 00:26:56.941 "nvme": [ 00:26:56.941 { 00:26:56.941 "trid": { 00:26:56.941 "trtype": "RDMA", 00:26:56.941 "adrfam": "IPv4", 00:26:56.941 "traddr": "192.168.100.8", 00:26:56.941 "trsvcid": "4420", 00:26:56.941 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:56.941 }, 00:26:56.941 "ctrlr_data": { 00:26:56.941 "cntlid": 1, 00:26:56.941 "vendor_id": "0x8086", 00:26:56.941 "model_number": "SPDK bdev Controller", 00:26:56.941 "serial_number": "00000000000000000000", 00:26:56.941 "firmware_revision": "25.01", 00:26:56.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.941 "oacs": { 00:26:56.941 "security": 0, 
00:26:56.941 "format": 0, 00:26:56.941 "firmware": 0, 00:26:56.941 "ns_manage": 0 00:26:56.941 }, 00:26:56.941 "multi_ctrlr": true, 00:26:56.941 "ana_reporting": false 00:26:56.941 }, 00:26:56.941 "vs": { 00:26:56.941 "nvme_version": "1.3" 00:26:56.941 }, 00:26:56.941 "ns_data": { 00:26:56.941 "id": 1, 00:26:56.941 "can_share": true 00:26:56.941 } 00:26:56.941 } 00:26:56.941 ], 00:26:56.941 "mp_policy": "active_passive" 00:26:56.941 } 00:26:56.941 } 00:26:56.941 ] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.941 [2024-11-17 04:43:35.349007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:56.941 [2024-11-17 04:43:35.365245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:56.941 [2024-11-17 04:43:35.387068] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.941 [ 00:26:56.941 { 00:26:56.941 "name": "nvme0n1", 00:26:56.941 "aliases": [ 00:26:56.941 "8b0e3b64-d648-4e41-bcf3-4f3388fa673f" 00:26:56.941 ], 00:26:56.941 "product_name": "NVMe disk", 00:26:56.941 "block_size": 512, 00:26:56.941 "num_blocks": 2097152, 00:26:56.941 "uuid": "8b0e3b64-d648-4e41-bcf3-4f3388fa673f", 00:26:56.941 "numa_id": 1, 00:26:56.941 "assigned_rate_limits": { 00:26:56.941 "rw_ios_per_sec": 0, 00:26:56.941 "rw_mbytes_per_sec": 0, 00:26:56.941 "r_mbytes_per_sec": 0, 00:26:56.941 "w_mbytes_per_sec": 0 00:26:56.941 }, 00:26:56.941 "claimed": false, 00:26:56.941 "zoned": false, 00:26:56.941 "supported_io_types": { 00:26:56.941 "read": true, 00:26:56.941 "write": true, 00:26:56.941 "unmap": false, 00:26:56.941 "flush": true, 00:26:56.941 "reset": true, 00:26:56.941 "nvme_admin": true, 00:26:56.941 "nvme_io": true, 00:26:56.941 "nvme_io_md": false, 00:26:56.941 "write_zeroes": true, 00:26:56.941 "zcopy": false, 00:26:56.941 "get_zone_info": false, 00:26:56.941 "zone_management": false, 00:26:56.941 "zone_append": false, 00:26:56.941 "compare": true, 00:26:56.941 "compare_and_write": true, 00:26:56.941 "abort": true, 00:26:56.941 "seek_hole": false, 00:26:56.941 "seek_data": false, 00:26:56.941 "copy": true, 00:26:56.941 "nvme_iov_md": false 00:26:56.941 }, 00:26:56.941 "memory_domains": [ 00:26:56.941 { 00:26:56.941 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:56.941 "dma_device_type": 0 00:26:56.941 } 00:26:56.941 ], 00:26:56.941 "driver_specific": { 00:26:56.941 "nvme": [ 00:26:56.941 { 00:26:56.941 "trid": { 00:26:56.941 "trtype": "RDMA", 00:26:56.941 "adrfam": "IPv4", 00:26:56.941 "traddr": "192.168.100.8", 
00:26:56.941 "trsvcid": "4420", 00:26:56.941 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:56.941 }, 00:26:56.941 "ctrlr_data": { 00:26:56.941 "cntlid": 2, 00:26:56.941 "vendor_id": "0x8086", 00:26:56.941 "model_number": "SPDK bdev Controller", 00:26:56.941 "serial_number": "00000000000000000000", 00:26:56.941 "firmware_revision": "25.01", 00:26:56.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.941 "oacs": { 00:26:56.941 "security": 0, 00:26:56.941 "format": 0, 00:26:56.941 "firmware": 0, 00:26:56.941 "ns_manage": 0 00:26:56.941 }, 00:26:56.941 "multi_ctrlr": true, 00:26:56.941 "ana_reporting": false 00:26:56.941 }, 00:26:56.941 "vs": { 00:26:56.941 "nvme_version": "1.3" 00:26:56.941 }, 00:26:56.941 "ns_data": { 00:26:56.941 "id": 1, 00:26:56.941 "can_share": true 00:26:56.941 } 00:26:56.941 } 00:26:56.941 ], 00:26:56.941 "mp_policy": "active_passive" 00:26:56.941 } 00:26:56.941 } 00:26:56.941 ] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.941 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.kuWZUypR7Y 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.kuWZUypR7Y 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.kuWZUypR7Y 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 [2024-11-17 04:43:35.485964] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 [2024-11-17 04:43:35.506019] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:56.942 nvme0n1 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 [ 00:26:56.942 { 00:26:56.942 "name": "nvme0n1", 00:26:56.942 "aliases": [ 00:26:56.942 "8b0e3b64-d648-4e41-bcf3-4f3388fa673f" 00:26:56.942 ], 00:26:56.942 "product_name": "NVMe disk", 00:26:56.942 "block_size": 512, 00:26:56.942 "num_blocks": 2097152, 00:26:56.942 "uuid": "8b0e3b64-d648-4e41-bcf3-4f3388fa673f", 00:26:56.942 "numa_id": 1, 00:26:56.942 "assigned_rate_limits": { 00:26:56.942 "rw_ios_per_sec": 0, 00:26:56.942 "rw_mbytes_per_sec": 0, 00:26:56.942 "r_mbytes_per_sec": 0, 00:26:56.942 "w_mbytes_per_sec": 0 00:26:56.942 }, 00:26:56.942 "claimed": false, 00:26:56.942 "zoned": false, 00:26:56.942 "supported_io_types": { 00:26:56.942 "read": true, 00:26:56.942 "write": true, 00:26:56.942 "unmap": false, 00:26:56.942 "flush": true, 00:26:56.942 "reset": true, 00:26:56.942 "nvme_admin": true, 00:26:56.942 "nvme_io": true, 00:26:56.942 "nvme_io_md": false, 00:26:56.942 "write_zeroes": true, 00:26:56.942 "zcopy": false, 00:26:56.942 "get_zone_info": false, 00:26:56.942 "zone_management": false, 00:26:56.942 "zone_append": false, 00:26:56.942 "compare": true, 00:26:56.942 "compare_and_write": true, 00:26:56.942 "abort": true, 00:26:56.942 "seek_hole": false, 00:26:56.942 "seek_data": false, 00:26:56.942 "copy": true, 00:26:56.942 "nvme_iov_md": false 00:26:56.942 }, 00:26:56.942 "memory_domains": [ 00:26:56.942 { 00:26:56.942 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:56.942 "dma_device_type": 0 00:26:56.942 } 00:26:56.942 ], 00:26:56.942 "driver_specific": { 00:26:56.942 "nvme": [ 00:26:56.942 { 00:26:56.942 "trid": { 00:26:56.942 "trtype": "RDMA", 00:26:56.942 "adrfam": "IPv4", 00:26:56.942 "traddr": "192.168.100.8", 00:26:56.942 "trsvcid": "4421", 00:26:56.942 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:56.942 }, 00:26:56.942 "ctrlr_data": { 00:26:56.942 "cntlid": 3, 00:26:56.942 "vendor_id": "0x8086", 00:26:56.942 "model_number": "SPDK bdev Controller", 00:26:56.942 
"serial_number": "00000000000000000000", 00:26:56.942 "firmware_revision": "25.01", 00:26:56.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.942 "oacs": { 00:26:56.942 "security": 0, 00:26:56.942 "format": 0, 00:26:56.942 "firmware": 0, 00:26:56.942 "ns_manage": 0 00:26:56.942 }, 00:26:56.942 "multi_ctrlr": true, 00:26:56.942 "ana_reporting": false 00:26:56.942 }, 00:26:56.942 "vs": { 00:26:56.942 "nvme_version": "1.3" 00:26:56.942 }, 00:26:56.942 "ns_data": { 00:26:56.942 "id": 1, 00:26:56.942 "can_share": true 00:26:56.942 } 00:26:56.942 } 00:26:56.942 ], 00:26:56.942 "mp_policy": "active_passive" 00:26:56.942 } 00:26:56.942 } 00:26:56.942 ] 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.kuWZUypR7Y 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:56.942 rmmod nvme_rdma 00:26:56.942 rmmod nvme_fabrics 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 412228 ']' 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 412228 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 412228 ']' 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 412228 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.942 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 412228 00:26:57.200 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:57.200 04:43:35 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:57.200 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 412228' 00:26:57.200 killing process with pid 412228 00:26:57.200 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 412228 00:26:57.200 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 412228 00:26:57.200 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.201 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:57.201 00:26:57.201 real 0m8.675s 00:26:57.201 user 0m3.317s 00:26:57.201 sys 0m6.001s 00:26:57.201 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.201 04:43:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.201 ************************************ 00:26:57.201 END TEST nvmf_async_init 00:26:57.201 ************************************ 00:26:57.201 04:43:35 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:57.201 04:43:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:57.201 04:43:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.201 04:43:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.458 ************************************ 00:26:57.458 START TEST dma 00:26:57.458 ************************************ 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:57.458 * Looking for test storage... 
00:26:57.458 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.458 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:57.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.459 --rc genhtml_branch_coverage=1 00:26:57.459 --rc genhtml_function_coverage=1 00:26:57.459 --rc genhtml_legend=1 00:26:57.459 --rc geninfo_all_blocks=1 00:26:57.459 --rc geninfo_unexecuted_blocks=1 00:26:57.459 00:26:57.459 ' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:57.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.459 --rc genhtml_branch_coverage=1 00:26:57.459 --rc genhtml_function_coverage=1 00:26:57.459 --rc genhtml_legend=1 00:26:57.459 --rc geninfo_all_blocks=1 00:26:57.459 --rc geninfo_unexecuted_blocks=1 00:26:57.459 00:26:57.459 ' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:57.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.459 --rc genhtml_branch_coverage=1 00:26:57.459 --rc genhtml_function_coverage=1 00:26:57.459 --rc genhtml_legend=1 00:26:57.459 --rc geninfo_all_blocks=1 00:26:57.459 --rc geninfo_unexecuted_blocks=1 00:26:57.459 00:26:57.459 ' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:57.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.459 --rc genhtml_branch_coverage=1 00:26:57.459 --rc genhtml_function_coverage=1 00:26:57.459 --rc genhtml_legend=1 00:26:57.459 --rc geninfo_all_blocks=1 00:26:57.459 --rc geninfo_unexecuted_blocks=1 00:26:57.459 00:26:57.459 ' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=[long PATH value: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended repeatedly ahead of the stock system PATH; full listing omitted] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=[same entries, rotated; listing omitted] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # PATH=[same entries, rotated; listing omitted] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo [same PATH value as above; listing omitted] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:57.459 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
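Note: the nvmftestinit trace that continues below is the harness discovering physical RDMA NICs: gather_supported_nvmf_pci_devs builds device-ID tables for Intel e810/x722 and Mellanox ConnectX parts, walks the PCI bus, and on this node finds both ports of a ConnectX-4 Lx (0x15b3:0x1015) at 0000:d9:00.0/.1 with netdevs mlx_0_0 and mlx_0_1. A minimal sketch of the same discovery, assuming lspci(8) and sysfs are available (an illustration, not the harness's exact code):

    # List Mellanox NICs (vendor 0x15b3) and the netdevs bound to them,
    # mirroring the "Found 0000:d9:00.x" / "Found net devices" lines below.
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
        done
    done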
00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:26:57.459 04:43:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:27:05.583 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:05.584 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:05.584 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:05.584 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:05.584 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:05.584 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:05.584 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:05.584 altname enp217s0f0np0 00:27:05.584 altname ens818f0np0 00:27:05.584 inet 192.168.100.8/24 scope global mlx_0_0 00:27:05.584 valid_lft forever preferred_lft forever 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:05.584 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:05.584 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:05.584 altname enp217s0f1np1 00:27:05.584 altname ens818f1np1 00:27:05.584 inet 192.168.100.9/24 scope global mlx_0_1 00:27:05.584 valid_lft forever preferred_lft forever 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.584 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:05.585 192.168.100.9' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:05.585 192.168.100.9' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:05.585 192.168.100.9' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=415685 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 415685 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 415685 ']' 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 [2024-11-17 04:43:43.628768] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:27:05.585 [2024-11-17 04:43:43.628830] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.585 [2024-11-17 04:43:43.723152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:05.585 [2024-11-17 04:43:43.744536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.585 [2024-11-17 04:43:43.744574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.585 [2024-11-17 04:43:43.744583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.585 [2024-11-17 04:43:43.744592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.585 [2024-11-17 04:43:43.744615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
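Note: at this point nvmfappstart has launched the target (nvmfpid=415685) and waitforlisten blocks until the app answers on its RPC socket. A minimal sketch of that start-and-wait pattern, assuming repo-relative paths and the default /var/tmp/spdk.sock socket (the harness's waitforlisten is more elaborate):

    # Start nvmf_tgt with the flags from the trace, then poll the RPC socket.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # not listening yet; keep polling
    done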
00:27:05.585 [2024-11-17 04:43:43.745901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.585 [2024-11-17 04:43:43.745902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 [2024-11-17 04:43:43.904506] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa72590/0xa76a40) succeed. 00:27:05.585 [2024-11-17 04:43:43.913457] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa73a90/0xab80e0) succeed. 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 04:43:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 Malloc0 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 [2024-11-17 04:43:44.075036] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:05.585 { 00:27:05.585 "params": { 00:27:05.585 "name": "Nvme$subsystem", 00:27:05.585 "trtype": "$TEST_TRANSPORT", 00:27:05.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:05.585 "adrfam": "ipv4", 00:27:05.585 "trsvcid": "$NVMF_PORT", 00:27:05.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:05.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:05.585 "hdgst": ${hdgst:-false}, 00:27:05.585 "ddgst": ${ddgst:-false} 00:27:05.585 }, 00:27:05.585 "method": "bdev_nvme_attach_controller" 00:27:05.585 } 00:27:05.585 EOF 00:27:05.585 )") 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:27:05.585 04:43:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:27:05.586 04:43:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:27:05.586 04:43:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:05.586 "params": { 00:27:05.586 "name": "Nvme0", 00:27:05.586 "trtype": "rdma", 00:27:05.586 "traddr": "192.168.100.8", 00:27:05.586 "adrfam": "ipv4", 00:27:05.586 "trsvcid": "4420", 00:27:05.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:05.586 "hdgst": false, 00:27:05.586 "ddgst": false 00:27:05.586 }, 00:27:05.586 "method": "bdev_nvme_attach_controller" 00:27:05.586 }' 00:27:05.586 [2024-11-17 04:43:44.128155] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
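Note: the rpc_cmd calls traced above assembled the target side (RDMA transport, a 256 MiB Malloc0 namespace with 512 B blocks, listener on 192.168.100.8:4420), and gen_nvmf_target_json rendered the bdev_nvme_attach_controller config that test_dma reads over /dev/fd/62. Issued standalone through rpc.py against the default socket, the same setup would be (flags exactly as traced):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420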
00:27:05.586 [2024-11-17 04:43:44.128202] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415793 ] 00:27:05.586 [2024-11-17 04:43:44.221099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:05.586 [2024-11-17 04:43:44.244569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.586 [2024-11-17 04:43:44.244569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.860 bdev Nvme0n1 reports 1 memory domains 00:27:10.860 bdev Nvme0n1 supports RDMA memory domain 00:27:10.860 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:10.860 ========================================================================== 00:27:10.860 Latency [us] 00:27:10.860 IOPS MiB/s Average min max 00:27:10.860 Core 2: 21670.45 84.65 737.71 230.83 8561.41 00:27:10.860 Core 3: 21584.66 84.32 740.62 258.35 8488.63 00:27:10.860 ========================================================================== 00:27:10.860 Total : 43255.12 168.97 739.16 230.83 8561.41 00:27:10.860 00:27:10.860 Total operations: 216303, translate 216303 pull_push 0 memzero 0 00:27:10.860 04:43:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:27:10.860 04:43:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:27:10.860 04:43:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:27:10.860 [2024-11-17 04:43:49.649200] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
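Note: in the translate run above every completed I/O lands in the translate bucket (216303 of 216303) because the NVMe-oF bdev Nvme0n1 supports an RDMA memory domain, so no copy fallback is needed. The totals are also self-consistent with the reported 43255.12 aggregate IOPS over the 5-second window set by -t 5:

    # ~216276 expected vs 216303 logged; counters run slightly past the window.
    awk 'BEGIN { printf "expected ops ~ %.0f\n", 43255.12 * 5 }'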
00:27:10.860 [2024-11-17 04:43:49.649254] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416767 ] 00:27:11.119 [2024-11-17 04:43:49.742001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:11.119 [2024-11-17 04:43:49.764805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.119 [2024-11-17 04:43:49.764806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.395 bdev Malloc0 reports 2 memory domains 00:27:16.395 bdev Malloc0 doesn't support RDMA memory domain 00:27:16.395 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:16.395 ========================================================================== 00:27:16.395 Latency [us] 00:27:16.395 IOPS MiB/s Average min max 00:27:16.395 Core 2: 14389.27 56.21 1111.24 471.02 1426.70 00:27:16.395 Core 3: 14585.20 56.97 1096.30 457.37 1788.79 00:27:16.395 ========================================================================== 00:27:16.395 Total : 28974.47 113.18 1103.72 457.37 1788.79 00:27:16.395 00:27:16.395 Total operations: 144926, translate 0 pull_push 579704 memzero 0 00:27:16.395 04:43:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:27:16.395 04:43:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:27:16.395 04:43:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:16.395 04:43:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:27:16.395 Ignoring -M option 00:27:16.395 [2024-11-17 04:43:55.078462] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
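Note: the pull_push run above drives Malloc0 directly, and because a Malloc bdev does not support an RDMA memory domain the DMA path falls back to pull/push copying: translate stays 0 while pull_push reaches 579704 across 144926 completed 4 KiB I/Os. The counter comes out at exactly four ticks per I/O here; how the harness scales that counter is an internal detail, and the arithmetic below only verifies the ratio:

    awk 'BEGIN { print 579704 / 144926 }'    # prints 4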
00:27:16.395 [2024-11-17 04:43:55.078519] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417574 ] 00:27:16.395 [2024-11-17 04:43:55.170042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:16.395 [2024-11-17 04:43:55.191014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.395 [2024-11-17 04:43:55.191015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.975 bdev 61598b7b-d1c5-4bc1-b21d-5590b3b1542f reports 1 memory domains 00:27:22.975 bdev 61598b7b-d1c5-4bc1-b21d-5590b3b1542f supports RDMA memory domain 00:27:22.975 Initialization complete, running randread IO for 5 sec on 2 cores 00:27:22.975 ========================================================================== 00:27:22.975 Latency [us] 00:27:22.975 IOPS MiB/s Average min max 00:27:22.975 Core 2: 67738.95 264.61 235.24 88.81 3461.90 00:27:22.975 Core 3: 69620.28 271.95 228.87 84.09 2011.14 00:27:22.975 ========================================================================== 00:27:22.975 Total : 137359.23 536.56 232.01 84.09 3461.90 00:27:22.975 00:27:22.975 Total operations: 686896, translate 0 pull_push 0 memzero 686896 00:27:22.975 04:44:00 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:27:22.975 [2024-11-17 04:44:00.724601] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:24.354 Initializing NVMe Controllers 00:27:24.354 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:27:24.354 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:24.354 Initialization complete. Launching workers. 00:27:24.354 ======================================================== 00:27:24.354 Latency(us) 00:27:24.354 Device Information : IOPS MiB/s Average min max 00:27:24.354 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2008.88 7.85 7932.53 4986.13 10005.76 00:27:24.354 ======================================================== 00:27:24.354 Total : 2008.88 7.85 7932.53 4986.13 10005.76 00:27:24.354 00:27:24.354 04:44:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:27:24.354 04:44:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:27:24.354 04:44:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:24.354 04:44:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:27:24.354 [2024-11-17 04:44:03.061518] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
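Note: the memzero run above targets lvs0/lvol0 (logged under its bdev UUID 61598b7b-d1c5-4bc1-b21d-5590b3b1542f) and completes 686896 operations, all accounted to the memzero bucket, after which a short spdk_nvme_perf smoke test attaches to the same subsystem over NVMe-oF/RDMA. Spelled out with the flags from the trace (a 1-second, queue-depth-16 write workload against the listener created earlier):

    ./build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'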
00:27:24.354 [2024-11-17 04:44:03.061573] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418901 ] 00:27:24.354 [2024-11-17 04:44:03.154181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:24.354 [2024-11-17 04:44:03.177478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.354 [2024-11-17 04:44:03.177478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.928 bdev 8a754ac2-6e88-4da9-b3b3-6d348c786149 reports 1 memory domains 00:27:30.928 bdev 8a754ac2-6e88-4da9-b3b3-6d348c786149 supports RDMA memory domain 00:27:30.928 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:30.928 ========================================================================== 00:27:30.928 Latency [us] 00:27:30.928 IOPS MiB/s Average min max 00:27:30.928 Core 2: 19134.29 74.74 835.55 41.27 13507.34 00:27:30.928 Core 3: 19408.25 75.81 823.72 11.34 13070.28 00:27:30.928 ========================================================================== 00:27:30.928 Total : 38542.54 150.56 829.59 11.34 13507.34 00:27:30.928 00:27:30.928 Total operations: 192738, translate 192624 pull_push 0 memzero 114 00:27:30.928 04:44:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:30.928 04:44:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:27:30.928 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:30.928 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:30.929 rmmod nvme_rdma 00:27:30.929 rmmod nvme_fabrics 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 415685 ']' 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 415685 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 415685 ']' 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 415685 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 415685 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 415685' 00:27:30.929 killing process with 
pid 415685 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 415685 00:27:30.929 04:44:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 415685 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:30.929 00:27:30.929 real 0m32.967s 00:27:30.929 user 1m34.993s 00:27:30.929 sys 0m6.909s 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:30.929 ************************************ 00:27:30.929 END TEST dma 00:27:30.929 ************************************ 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.929 ************************************ 00:27:30.929 START TEST nvmf_identify 00:27:30.929 ************************************ 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:30.929 * Looking for test storage... 00:27:30.929 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 
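Note: the scripts/common.sh trace around this point is the harness checking whether the installed lcov (1.15) is older than 2 so it can pick compatible coverage flags; the field-by-field comparison continues below, returns true, and the legacy --rc lcov_*_coverage options get exported. A stand-alone sketch of that comparison (simplified, not the harness's exact cmp_versions):

    # Compare dot/dash-separated numeric versions; succeed when $1 < $2.
    version_lt() {
        local -a a b
        local i n
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov options"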
00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:30.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.929 --rc genhtml_branch_coverage=1 00:27:30.929 --rc genhtml_function_coverage=1 00:27:30.929 --rc genhtml_legend=1 00:27:30.929 --rc geninfo_all_blocks=1 00:27:30.929 --rc geninfo_unexecuted_blocks=1 00:27:30.929 00:27:30.929 ' 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:30.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.929 --rc genhtml_branch_coverage=1 00:27:30.929 --rc genhtml_function_coverage=1 00:27:30.929 --rc genhtml_legend=1 00:27:30.929 --rc geninfo_all_blocks=1 00:27:30.929 --rc geninfo_unexecuted_blocks=1 00:27:30.929 00:27:30.929 ' 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:30.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.929 --rc genhtml_branch_coverage=1 00:27:30.929 --rc genhtml_function_coverage=1 00:27:30.929 --rc genhtml_legend=1 00:27:30.929 --rc geninfo_all_blocks=1 00:27:30.929 --rc geninfo_unexecuted_blocks=1 00:27:30.929 00:27:30.929 ' 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:30.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.929 --rc genhtml_branch_coverage=1 00:27:30.929 --rc genhtml_function_coverage=1 00:27:30.929 --rc genhtml_legend=1 00:27:30.929 --rc geninfo_all_blocks=1 00:27:30.929 --rc geninfo_unexecuted_blocks=1 00:27:30.929 00:27:30.929 ' 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.929 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:30.929 04:44:09 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.929 [nvmf/common.sh environment-setup trace identical to the dma section above; duplicate lines condensed: ports 4420/4421/4422, NVMF_IP_PREFIX=192.168.100, NVMF_SERIAL=SPDKISFASTANDAWESOME, the same generated NVME_HOSTNQN/NVME_HOSTID (8013ee90-59d8-e711-906e-00163566263e), NET_TYPE=phy, sourcing of scripts/common.sh and the paths/export.sh PATH exports, then build_nvmf_app_args with the same 'common.sh: line 33: [: : integer expression expected' warning, ending with have_pci_nics=0] 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:30.930 04:44:09
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.930 04:44:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.505 04:44:16 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:37.505 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:37.505 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.505 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:37.506 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:37.506 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:37.506 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:37.765 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:37.765 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:37.765 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:27:37.765 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:37.766 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:37.766 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:37.766 altname enp217s0f0np0 00:27:37.766 altname ens818f0np0 00:27:37.766 inet 192.168.100.8/24 scope global mlx_0_0 00:27:37.766 valid_lft forever preferred_lft forever 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:37.766 04:44:16 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:37.766 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:37.766 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:37.766 altname enp217s0f1np1 00:27:37.766 altname ens818f1np1 00:27:37.766 inet 192.168.100.9/24 scope global mlx_0_1 00:27:37.766 valid_lft forever preferred_lft forever 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:37.766 04:44:16 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:37.766 192.168.100.9' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:37.766 192.168.100.9' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:37.766 192.168.100.9' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=423257 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 423257 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 423257 ']' 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.766 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.766 [2024-11-17 04:44:16.591063] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:27:37.766 [2024-11-17 04:44:16.591114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.025 [2024-11-17 04:44:16.681713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.025 [2024-11-17 04:44:16.705480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.025 [2024-11-17 04:44:16.705518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.025 [2024-11-17 04:44:16.705528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.025 [2024-11-17 04:44:16.705537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.025 [2024-11-17 04:44:16.705544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.025 [2024-11-17 04:44:16.707129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.025 [2024-11-17 04:44:16.707241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.025 [2024-11-17 04:44:16.707350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.025 [2024-11-17 04:44:16.707351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.025 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.025 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:27:38.025 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:38.025 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.025 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.025 [2024-11-17 04:44:16.823486] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1774c50/0x1779100) succeed. 00:27:38.025 [2024-11-17 04:44:16.832725] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1776290/0x17ba7a0) succeed. 
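For reference, a minimal stand-alone recap of the setup traced above, from reading the RDMA interface addresses to a target answering RPC. The get_ip_address body and the head/tail IP selection mirror the nvmf/common.sh one-liners in this trace; paths assume an SPDK checkout, and the polling loop is a simplified stand-in for the real waitforlisten, not its actual implementation.

# get_ip_address as traced above: column 4 of `ip -o -4 addr show` is the
# CIDR address (e.g. 192.168.100.8/24); cut drops the prefix length.
get_ip_address() {
  ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

sudo modprobe nvme-rdma
sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # same flags as this run
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll the RPC socket until it answers.
for (( i = 0; i < 100; i++ )); do
  kill -0 "$nvmfpid" 2>/dev/null || exit 1           # target process died
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sleep 0.1
done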
00:27:38.285 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.285 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:38.285 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.285 04:44:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.285 Malloc0 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.285 [2024-11-17 04:44:17.070752] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.285 [ 00:27:38.285 { 00:27:38.285 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:38.285 "subtype": "Discovery", 00:27:38.285 "listen_addresses": [ 00:27:38.285 { 00:27:38.285 "trtype": "RDMA", 
00:27:38.285 "adrfam": "IPv4", 00:27:38.285 "traddr": "192.168.100.8", 00:27:38.285 "trsvcid": "4420" 00:27:38.285 } 00:27:38.285 ], 00:27:38.285 "allow_any_host": true, 00:27:38.285 "hosts": [] 00:27:38.285 }, 00:27:38.285 { 00:27:38.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.285 "subtype": "NVMe", 00:27:38.285 "listen_addresses": [ 00:27:38.285 { 00:27:38.285 "trtype": "RDMA", 00:27:38.285 "adrfam": "IPv4", 00:27:38.285 "traddr": "192.168.100.8", 00:27:38.285 "trsvcid": "4420" 00:27:38.285 } 00:27:38.285 ], 00:27:38.285 "allow_any_host": true, 00:27:38.285 "hosts": [], 00:27:38.285 "serial_number": "SPDK00000000000001", 00:27:38.285 "model_number": "SPDK bdev Controller", 00:27:38.285 "max_namespaces": 32, 00:27:38.285 "min_cntlid": 1, 00:27:38.285 "max_cntlid": 65519, 00:27:38.285 "namespaces": [ 00:27:38.285 { 00:27:38.285 "nsid": 1, 00:27:38.285 "bdev_name": "Malloc0", 00:27:38.285 "name": "Malloc0", 00:27:38.285 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:38.285 "eui64": "ABCDEF0123456789", 00:27:38.285 "uuid": "2f94e78f-4914-44f7-aa1d-ce386b0c0ef9" 00:27:38.285 } 00:27:38.285 ] 00:27:38.285 } 00:27:38.285 ] 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.285 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:38.549 [2024-11-17 04:44:17.128042] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:27:38.549 [2024-11-17 04:44:17.128081] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423415 ] 00:27:38.549 [2024-11-17 04:44:17.188175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:27:38.549 [2024-11-17 04:44:17.188248] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:38.549 [2024-11-17 04:44:17.188267] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:38.549 [2024-11-17 04:44:17.188272] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:38.549 [2024-11-17 04:44:17.188302] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:27:38.549 [2024-11-17 04:44:17.206475] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:27:38.549 [2024-11-17 04:44:17.221061] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:38.549 [2024-11-17 04:44:17.221073] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:38.550 [2024-11-17 04:44:17.221080] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221088] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221094] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221100] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221109] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221115] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221121] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221127] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221134] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221140] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221146] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221152] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221158] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221164] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221170] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221176] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221182] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221188] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221195] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221201] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221207] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221213] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221219] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 
04:44:17.221225] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221231] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221237] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221243] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221250] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221256] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221262] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221268] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221274] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:38.550 [2024-11-17 04:44:17.221280] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:38.550 [2024-11-17 04:44:17.221284] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:38.550 [2024-11-17 04:44:17.221298] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.221312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180700 00:27:38.550 [2024-11-17 04:44:17.226937] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.550 [2024-11-17 04:44:17.226948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:38.550 [2024-11-17 04:44:17.226956] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.226964] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:38.550 [2024-11-17 04:44:17.226972] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:27:38.550 [2024-11-17 04:44:17.226978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:27:38.550 [2024-11-17 04:44:17.226995] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.550 [2024-11-17 04:44:17.227025] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.550 [2024-11-17 04:44:17.227030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:38.550 [2024-11-17 04:44:17.227037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:27:38.550 [2024-11-17 04:44:17.227043] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227050] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:27:38.550 [2024-11-17 04:44:17.227057] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.550 [2024-11-17 04:44:17.227084] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.550 [2024-11-17 04:44:17.227090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:38.550 [2024-11-17 04:44:17.227097] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:27:38.550 [2024-11-17 04:44:17.227103] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:38.550 [2024-11-17 04:44:17.227119] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.550 [2024-11-17 04:44:17.227143] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.550 [2024-11-17 04:44:17.227149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.550 [2024-11-17 04:44:17.227156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:38.550 [2024-11-17 04:44:17.227162] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227170] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.550 [2024-11-17 04:44:17.227195] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.550 [2024-11-17 04:44:17.227201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.550 [2024-11-17 04:44:17.227207] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:38.550 [2024-11-17 04:44:17.227213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:38.550 [2024-11-17 04:44:17.227219] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 
04:44:17.227226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:38.550 [2024-11-17 04:44:17.227335] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:27:38.550 [2024-11-17 04:44:17.227341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:38.550 [2024-11-17 04:44:17.227350] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.550 [2024-11-17 04:44:17.227378] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.550 [2024-11-17 04:44:17.227384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.550 [2024-11-17 04:44:17.227390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:38.550 [2024-11-17 04:44:17.227396] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227404] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.550 [2024-11-17 04:44:17.227429] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.550 [2024-11-17 04:44:17.227435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:38.550 [2024-11-17 04:44:17.227441] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:38.550 [2024-11-17 04:44:17.227447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:38.550 [2024-11-17 04:44:17.227453] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180700 00:27:38.550 [2024-11-17 04:44:17.227460] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:27:38.550 [2024-11-17 04:44:17.227472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:38.551 [2024-11-17 04:44:17.227482] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180700 00:27:38.551 [2024-11-17 04:44:17.227533] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
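The FABRIC PROPERTY GET/SET traffic above is the standard fabrics enable handshake: read VS and CAP, confirm CC.EN = 0 and CSTS.RDY = 0, set CC.EN = 1, then poll CSTS.RDY within the 15000 ms timeout. A shape-only bash sketch of that loop; read_reg and write_reg are hypothetical stubs standing in for Property Get/Set, not SPDK or nvme-cli calls:

declare -A REGS=([CC]=0 [CSTS]=1)        # toy register file: RDY already 1 so the
read_reg()  { echo "${REGS[$1]:-0}"; }   # loop exits on the first pass (illustration
write_reg() { REGS[$1]=$2; }             # only; real CSTS.RDY starts at 0)

write_reg CC 1                           # Setting CC.EN = 1, as logged above
for (( i = 0; i < 150; i++ )); do        # 15000 ms timeout at 100 ms per poll
  if (( $(read_reg CSTS) & 1 )); then
    echo "CC.EN = 1 && CSTS.RDY = 1 - controller is ready"
    break
  fi
  sleep 0.1
done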
00:27:38.551 [2024-11-17 04:44:17.227540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.551 [2024-11-17 04:44:17.227549] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:27:38.551 [2024-11-17 04:44:17.227555] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:27:38.551 [2024-11-17 04:44:17.227561] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:27:38.551 [2024-11-17 04:44:17.227568] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:27:38.551 [2024-11-17 04:44:17.227573] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:27:38.551 [2024-11-17 04:44:17.227579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:27:38.551 [2024-11-17 04:44:17.227585] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:38.551 [2024-11-17 04:44:17.227603] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.551 [2024-11-17 04:44:17.227632] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.551 [2024-11-17 04:44:17.227637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.551 [2024-11-17 04:44:17.227646] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.551 [2024-11-17 04:44:17.227660] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.551 [2024-11-17 04:44:17.227674] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.551 [2024-11-17 04:44:17.227688] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.551 [2024-11-17 04:44:17.227700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:38.551 [2024-11-17 04:44:17.227706] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:38.551 [2024-11-17 04:44:17.227725] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.551 [2024-11-17 04:44:17.227748] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.551 [2024-11-17 04:44:17.227755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:38.551 [2024-11-17 04:44:17.227762] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:27:38.551 [2024-11-17 04:44:17.227768] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:27:38.551 [2024-11-17 04:44:17.227774] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227783] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180700 00:27:38.551 [2024-11-17 04:44:17.227816] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.551 [2024-11-17 04:44:17.227822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.551 [2024-11-17 04:44:17.227829] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227839] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:27:38.551 [2024-11-17 04:44:17.227863] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x180700 00:27:38.551 [2024-11-17 04:44:17.227878] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.551 [2024-11-17 04:44:17.227900] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.551 [2024-11-17 04:44:17.227905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
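The GET LOG PAGE commands above (cdw10 0x70) are the discovery log reads; the kernel initiator's view of the same listener can be had with nvme-cli, assuming the nvme-rdma module loaded earlier in this run:

sudo nvme discover -t rdma -a 192.168.100.8 -s 4420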
00:27:38.551 [2024-11-17 04:44:17.227916] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180700 00:27:38.551 [2024-11-17 04:44:17.227930] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227942] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.551 [2024-11-17 04:44:17.227947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.551 [2024-11-17 04:44:17.227953] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227960] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.551 [2024-11-17 04:44:17.227965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.551 [2024-11-17 04:44:17.227976] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.227983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180700 00:27:38.551 [2024-11-17 04:44:17.227989] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180700 00:27:38.551 [2024-11-17 04:44:17.228004] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.551 [2024-11-17 04:44:17.228010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.551 [2024-11-17 04:44:17.228020] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180700 00:27:38.551 ===================================================== 00:27:38.551 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:38.551 ===================================================== 00:27:38.551 Controller Capabilities/Features 00:27:38.551 ================================ 00:27:38.551 Vendor ID: 0000 00:27:38.551 Subsystem Vendor ID: 0000 00:27:38.551 Serial Number: .................... 00:27:38.551 Model Number: ........................................ 
00:27:38.551 Firmware Version: 25.01 00:27:38.551 Recommended Arb Burst: 0 00:27:38.551 IEEE OUI Identifier: 00 00 00 00:27:38.551 Multi-path I/O 00:27:38.551 May have multiple subsystem ports: No 00:27:38.551 May have multiple controllers: No 00:27:38.551 Associated with SR-IOV VF: No 00:27:38.551 Max Data Transfer Size: 131072 00:27:38.551 Max Number of Namespaces: 0 00:27:38.551 Max Number of I/O Queues: 1024 00:27:38.551 NVMe Specification Version (VS): 1.3 00:27:38.551 NVMe Specification Version (Identify): 1.3 00:27:38.551 Maximum Queue Entries: 128 00:27:38.551 Contiguous Queues Required: Yes 00:27:38.551 Arbitration Mechanisms Supported 00:27:38.551 Weighted Round Robin: Not Supported 00:27:38.551 Vendor Specific: Not Supported 00:27:38.551 Reset Timeout: 15000 ms 00:27:38.551 Doorbell Stride: 4 bytes 00:27:38.551 NVM Subsystem Reset: Not Supported 00:27:38.551 Command Sets Supported 00:27:38.551 NVM Command Set: Supported 00:27:38.551 Boot Partition: Not Supported 00:27:38.551 Memory Page Size Minimum: 4096 bytes 00:27:38.551 Memory Page Size Maximum: 4096 bytes 00:27:38.551 Persistent Memory Region: Not Supported 00:27:38.551 Optional Asynchronous Events Supported 00:27:38.551 Namespace Attribute Notices: Not Supported 00:27:38.551 Firmware Activation Notices: Not Supported 00:27:38.551 ANA Change Notices: Not Supported 00:27:38.551 PLE Aggregate Log Change Notices: Not Supported 00:27:38.551 LBA Status Info Alert Notices: Not Supported 00:27:38.551 EGE Aggregate Log Change Notices: Not Supported 00:27:38.551 Normal NVM Subsystem Shutdown event: Not Supported 00:27:38.551 Zone Descriptor Change Notices: Not Supported 00:27:38.551 Discovery Log Change Notices: Supported 00:27:38.551 Controller Attributes 00:27:38.551 128-bit Host Identifier: Not Supported 00:27:38.551 Non-Operational Permissive Mode: Not Supported 00:27:38.551 NVM Sets: Not Supported 00:27:38.551 Read Recovery Levels: Not Supported 00:27:38.551 Endurance Groups: Not Supported 00:27:38.551 Predictable Latency Mode: Not Supported 00:27:38.552 Traffic Based Keep ALive: Not Supported 00:27:38.552 Namespace Granularity: Not Supported 00:27:38.552 SQ Associations: Not Supported 00:27:38.552 UUID List: Not Supported 00:27:38.552 Multi-Domain Subsystem: Not Supported 00:27:38.552 Fixed Capacity Management: Not Supported 00:27:38.552 Variable Capacity Management: Not Supported 00:27:38.552 Delete Endurance Group: Not Supported 00:27:38.552 Delete NVM Set: Not Supported 00:27:38.552 Extended LBA Formats Supported: Not Supported 00:27:38.552 Flexible Data Placement Supported: Not Supported 00:27:38.552 00:27:38.552 Controller Memory Buffer Support 00:27:38.552 ================================ 00:27:38.552 Supported: No 00:27:38.552 00:27:38.552 Persistent Memory Region Support 00:27:38.552 ================================ 00:27:38.552 Supported: No 00:27:38.552 00:27:38.552 Admin Command Set Attributes 00:27:38.552 ============================ 00:27:38.552 Security Send/Receive: Not Supported 00:27:38.552 Format NVM: Not Supported 00:27:38.552 Firmware Activate/Download: Not Supported 00:27:38.552 Namespace Management: Not Supported 00:27:38.552 Device Self-Test: Not Supported 00:27:38.552 Directives: Not Supported 00:27:38.552 NVMe-MI: Not Supported 00:27:38.552 Virtualization Management: Not Supported 00:27:38.552 Doorbell Buffer Config: Not Supported 00:27:38.552 Get LBA Status Capability: Not Supported 00:27:38.552 Command & Feature Lockdown Capability: Not Supported 00:27:38.552 Abort Command Limit: 1 00:27:38.552 Async 
Event Request Limit: 4 00:27:38.552 Number of Firmware Slots: N/A 00:27:38.552 Firmware Slot 1 Read-Only: N/A 00:27:38.552 Firmware Activation Without Reset: N/A 00:27:38.552 Multiple Update Detection Support: N/A 00:27:38.552 Firmware Update Granularity: No Information Provided 00:27:38.552 Per-Namespace SMART Log: No 00:27:38.552 Asymmetric Namespace Access Log Page: Not Supported 00:27:38.552 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:38.552 Command Effects Log Page: Not Supported 00:27:38.552 Get Log Page Extended Data: Supported 00:27:38.552 Telemetry Log Pages: Not Supported 00:27:38.552 Persistent Event Log Pages: Not Supported 00:27:38.552 Supported Log Pages Log Page: May Support 00:27:38.552 Commands Supported & Effects Log Page: Not Supported 00:27:38.552 Feature Identifiers & Effects Log Page:May Support 00:27:38.552 NVMe-MI Commands & Effects Log Page: May Support 00:27:38.552 Data Area 4 for Telemetry Log: Not Supported 00:27:38.552 Error Log Page Entries Supported: 128 00:27:38.552 Keep Alive: Not Supported 00:27:38.552 00:27:38.552 NVM Command Set Attributes 00:27:38.552 ========================== 00:27:38.552 Submission Queue Entry Size 00:27:38.552 Max: 1 00:27:38.552 Min: 1 00:27:38.552 Completion Queue Entry Size 00:27:38.552 Max: 1 00:27:38.552 Min: 1 00:27:38.552 Number of Namespaces: 0 00:27:38.552 Compare Command: Not Supported 00:27:38.552 Write Uncorrectable Command: Not Supported 00:27:38.552 Dataset Management Command: Not Supported 00:27:38.552 Write Zeroes Command: Not Supported 00:27:38.552 Set Features Save Field: Not Supported 00:27:38.552 Reservations: Not Supported 00:27:38.552 Timestamp: Not Supported 00:27:38.552 Copy: Not Supported 00:27:38.552 Volatile Write Cache: Not Present 00:27:38.552 Atomic Write Unit (Normal): 1 00:27:38.552 Atomic Write Unit (PFail): 1 00:27:38.552 Atomic Compare & Write Unit: 1 00:27:38.552 Fused Compare & Write: Supported 00:27:38.552 Scatter-Gather List 00:27:38.552 SGL Command Set: Supported 00:27:38.552 SGL Keyed: Supported 00:27:38.552 SGL Bit Bucket Descriptor: Not Supported 00:27:38.552 SGL Metadata Pointer: Not Supported 00:27:38.552 Oversized SGL: Not Supported 00:27:38.552 SGL Metadata Address: Not Supported 00:27:38.552 SGL Offset: Supported 00:27:38.552 Transport SGL Data Block: Not Supported 00:27:38.552 Replay Protected Memory Block: Not Supported 00:27:38.552 00:27:38.552 Firmware Slot Information 00:27:38.552 ========================= 00:27:38.552 Active slot: 0 00:27:38.552 00:27:38.552 00:27:38.552 Error Log 00:27:38.552 ========= 00:27:38.552 00:27:38.552 Active Namespaces 00:27:38.552 ================= 00:27:38.552 Discovery Log Page 00:27:38.552 ================== 00:27:38.552 Generation Counter: 2 00:27:38.552 Number of Records: 2 00:27:38.552 Record Format: 0 00:27:38.552 00:27:38.552 Discovery Log Entry 0 00:27:38.552 ---------------------- 00:27:38.552 Transport Type: 1 (RDMA) 00:27:38.552 Address Family: 1 (IPv4) 00:27:38.552 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:38.552 Entry Flags: 00:27:38.552 Duplicate Returned Information: 1 00:27:38.552 Explicit Persistent Connection Support for Discovery: 1 00:27:38.552 Transport Requirements: 00:27:38.552 Secure Channel: Not Required 00:27:38.552 Port ID: 0 (0x0000) 00:27:38.552 Controller ID: 65535 (0xffff) 00:27:38.552 Admin Max SQ Size: 128 00:27:38.552 Transport Service Identifier: 4420 00:27:38.552 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:38.552 Transport Address: 192.168.100.8 00:27:38.552 
Transport Specific Address Subtype - RDMA 00:27:38.552 RDMA QP Service Type: 1 (Reliable Connected) 00:27:38.552 RDMA Provider Type: 1 (No provider specified) 00:27:38.552 RDMA CM Service: 1 (RDMA_CM) 00:27:38.552 Discovery Log Entry 1 00:27:38.552 ---------------------- 00:27:38.552 Transport Type: 1 (RDMA) 00:27:38.552 Address Family: 1 (IPv4) 00:27:38.552 Subsystem Type: 2 (NVM Subsystem) 00:27:38.552 Entry Flags: 00:27:38.552 Duplicate Returned Information: 0 00:27:38.552 Explicit Persistent Connection Support for Discovery: 0 00:27:38.552 Transport Requirements: 00:27:38.552 Secure Channel: Not Required 00:27:38.552 Port ID: 0 (0x0000) 00:27:38.552 Controller ID: 65535 (0xffff) 00:27:38.552 Admin Max SQ Size: [2024-11-17 04:44:17.228089] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:27:38.552 [2024-11-17 04:44:17.228099] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 817 doesn't match qid 00:27:38.552 [2024-11-17 04:44:17.228112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:a9634eb0 sqhd:b320 p:0 m:0 dnr:0 00:27:38.552 [2024-11-17 04:44:17.228119] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 817 doesn't match qid 00:27:38.552 [2024-11-17 04:44:17.228127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:a9634eb0 sqhd:b320 p:0 m:0 dnr:0 00:27:38.552 [2024-11-17 04:44:17.228133] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 817 doesn't match qid 00:27:38.552 [2024-11-17 04:44:17.228140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:a9634eb0 sqhd:b320 p:0 m:0 dnr:0 00:27:38.552 [2024-11-17 04:44:17.228147] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 817 doesn't match qid 00:27:38.552 [2024-11-17 04:44:17.228154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:a9634eb0 sqhd:b320 p:0 m:0 dnr:0 00:27:38.552 [2024-11-17 04:44:17.228163] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180700 00:27:38.552 [2024-11-17 04:44:17.228171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.552 [2024-11-17 04:44:17.228186] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.552 [2024-11-17 04:44:17.228192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:27:38.552 [2024-11-17 04:44:17.228200] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.552 [2024-11-17 04:44:17.228208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.552 [2024-11-17 04:44:17.228214] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180700 00:27:38.552 [2024-11-17 04:44:17.228232] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.552 [2024-11-17 04:44:17.228238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.552 [2024-11-17 04:44:17.228244] 
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:27:38.552 [2024-11-17 04:44:17.228250] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:27:38.552 [2024-11-17 04:44:17.228256] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180700 00:27:38.552 [2024-11-17 04:44:17.228265] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.552 [2024-11-17 04:44:17.228272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.552 [2024-11-17 04:44:17.228290] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.552 [2024-11-17 04:44:17.228296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:38.552 [2024-11-17 04:44:17.228302] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180700 00:27:38.552 [2024-11-17 04:44:17.228311] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228335] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228348] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228356] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228384] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228396] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228404] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228428] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228440] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228449] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228477] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228489] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228498] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228523] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228535] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228544] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228568] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228580] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228591] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228618] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228630] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228639] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228663] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228675] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228683] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228711] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228722] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228731] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228758] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228770] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228779] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228804] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228816] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228825] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228856] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228869] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228878] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228907] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228919] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228928] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.228957] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.228963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.228969] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228978] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.228985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.229001] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.229007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.229013] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.229022] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.229029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.229047] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.229052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.229059] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.229067] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.229075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.229093] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.229098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:38.553 [2024-11-17 04:44:17.229104] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.229113] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.553 [2024-11-17 04:44:17.229121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.553 [2024-11-17 04:44:17.229138] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.553 [2024-11-17 04:44:17.229144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229152] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229161] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229190] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229202] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229210] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229239] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229251] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229260] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229283] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229295] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229304] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229335] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229346] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229355] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229382] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229394] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229403] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229428] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229442] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229450] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229477] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229489] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229498] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229529] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229541] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229549] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229578] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229590] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229599] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229626] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229638] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229646] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229670] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229682] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229690] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229719] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229732] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229741] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229768] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229780] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229789] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229819] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229831] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229840] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.554 [2024-11-17 04:44:17.229848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.554 [2024-11-17 04:44:17.229867] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.554 [2024-11-17 04:44:17.229873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:38.554 [2024-11-17 04:44:17.229879] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.229888] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.229895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.229913] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.229918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.229925] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.229937] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.229945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.229961] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.229967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.229973] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.229982] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.229989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230007] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230019] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230027] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230051] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230063] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230071] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230100] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230112] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230121] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230150] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230162] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230170] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230192] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230203] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230212] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230237] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230249] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230258] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230280] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230292] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230301] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230328] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230340] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230348] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230372] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230383] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230392] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230419] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230431] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230440] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230463] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230475] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230483] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230512] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230524] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230533] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230560] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230572] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230580] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230609] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230621] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230630] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230653] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230665] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230674] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.555 [2024-11-17 04:44:17.230681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.555 [2024-11-17 04:44:17.230701] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.555 [2024-11-17 04:44:17.230706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:38.555 [2024-11-17 04:44:17.230713] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230721] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.556 [2024-11-17 04:44:17.230750] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.556 [2024-11-17 04:44:17.230756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:38.556 [2024-11-17 04:44:17.230762] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230771] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.556 [2024-11-17 04:44:17.230804] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.556 [2024-11-17 04:44:17.230809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:38.556 [2024-11-17 04:44:17.230815] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230824] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.556 [2024-11-17 04:44:17.230849] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.556 [2024-11-17 04:44:17.230855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:38.556 [2024-11-17 04:44:17.230861] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230870] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:27:38.556 [2024-11-17 04:44:17.230895] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.556 [2024-11-17 04:44:17.230900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:38.556 [2024-11-17 04:44:17.230907] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230915] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.230923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.556 [2024-11-17 04:44:17.234936] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.556 [2024-11-17 04:44:17.234943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:38.556 [2024-11-17 04:44:17.234950] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.234959] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.234967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.556 [2024-11-17 04:44:17.234985] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.556 [2024-11-17 04:44:17.234990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000b p:0 m:0 dnr:0 00:27:38.556 [2024-11-17 04:44:17.234997] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180700 00:27:38.556 [2024-11-17 04:44:17.235004] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:27:38.556 128 00:27:38.556 Transport Service Identifier: 4420 00:27:38.556 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:38.556 Transport Address: 192.168.100.8 00:27:38.556 Transport Specific Address Subtype - RDMA 00:27:38.556 RDMA QP Service Type: 1 (Reliable Connected) 00:27:38.556 RDMA Provider Type: 1 (No provider specified) 00:27:38.556 RDMA CM Service: 1 (RDMA_CM) 00:27:38.556 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:38.556 [2024-11-17 04:44:17.308336] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
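The spdk_nvme_identify step invoked here amounts to parsing the -r transport string, connecting to nqn.2016-06.io.spdk:cnode1, and reading log page 0x70. That two-step discovery read is visible earlier in the trace: cdw10:00ff0070 with len:0x400 fetches the 1024-byte log header, and once numrec is known (2 records in this run) cdw10:02ff0070 with len:0xc00 fetches the header plus both 1024-byte entries. A sketch of the first step, assuming the public SPDK host API and an already-connected discovery controller; the helper names and busy-poll loop are illustrative:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static bool g_done;

    static void
    get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        /* Pairs with the "GET LOG PAGE (02) ... SUCCESS (00/00)" lines above. */
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "get log page failed\n");
        }
        g_done = true;
    }

    /* Read the 1024-byte discovery log header (genctr, numrec, recfmt); a
     * second read of sizeof(header) + numrec * 1024 bytes would follow. */
    static int
    read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                          struct spdk_nvmf_discovery_log_page *hdr)
    {
        g_done = false;
        /* Log page 0x70; NUMDL in the upper cdw10 bits selects 0x400 bytes,
         * exactly the first discovery fetch shown in the trace. */
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                             0 /* nsid */, hdr, sizeof(*hdr),
                                             0 /* offset */,
                                             get_log_done, NULL) != 0) {
            return -1;
        }
        while (!g_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
               (uint64_t)hdr->genctr, (uint64_t)hdr->numrec);
        return 0;
    }

The "Discovery Log Entry 0/1" dump rendered above is each struct spdk_nvmf_discovery_log_page_entry from the second, full-size read printed field by field.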
00:27:38.556 [2024-11-17 04:44:17.308392] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423417 ] 00:27:38.556 [2024-11-17 04:44:17.369152] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:27:38.556 [2024-11-17 04:44:17.369224] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:38.556 [2024-11-17 04:44:17.369239] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:38.556 [2024-11-17 04:44:17.369244] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:38.556 [2024-11-17 04:44:17.369268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:27:38.821 [2024-11-17 04:44:17.387481] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:27:38.821 [2024-11-17 04:44:17.397601] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:38.821 [2024-11-17 04:44:17.397611] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:38.821 [2024-11-17 04:44:17.397617] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397625] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397631] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397638] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397644] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397650] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397656] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397662] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397669] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397675] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397681] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397687] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397693] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397699] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397706] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 
0x180700 00:27:38.821 [2024-11-17 04:44:17.397712] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397718] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397724] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397730] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397736] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397743] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397749] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397755] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397764] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397770] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397776] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397783] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397789] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397795] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397801] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397807] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397813] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:38.821 [2024-11-17 04:44:17.397818] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:38.821 [2024-11-17 04:44:17.397823] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:38.821 [2024-11-17 04:44:17.397835] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.397847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180700 00:27:38.821 [2024-11-17 04:44:17.402938] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.821 [2024-11-17 04:44:17.402946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:38.821 [2024-11-17 04:44:17.402954] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.402961] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:38.821 
[2024-11-17 04:44:17.402968] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:27:38.821 [2024-11-17 04:44:17.402974] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:27:38.821 [2024-11-17 04:44:17.402986] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.402994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.821 [2024-11-17 04:44:17.403015] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.821 [2024-11-17 04:44:17.403021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:38.821 [2024-11-17 04:44:17.403028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:27:38.821 [2024-11-17 04:44:17.403034] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403040] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:27:38.821 [2024-11-17 04:44:17.403048] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.821 [2024-11-17 04:44:17.403072] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.821 [2024-11-17 04:44:17.403078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:38.821 [2024-11-17 04:44:17.403086] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:27:38.821 [2024-11-17 04:44:17.403093] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:38.821 [2024-11-17 04:44:17.403107] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.821 [2024-11-17 04:44:17.403137] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.821 [2024-11-17 04:44:17.403143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.821 [2024-11-17 04:44:17.403149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:38.821 [2024-11-17 04:44:17.403155] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403163] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.821 [2024-11-17 04:44:17.403193] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.821 [2024-11-17 04:44:17.403199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.821 [2024-11-17 04:44:17.403205] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:38.821 [2024-11-17 04:44:17.403211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:38.821 [2024-11-17 04:44:17.403217] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:38.821 [2024-11-17 04:44:17.403332] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:27:38.821 [2024-11-17 04:44:17.403339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:38.821 [2024-11-17 04:44:17.403348] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.821 [2024-11-17 04:44:17.403377] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.821 [2024-11-17 04:44:17.403383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.821 [2024-11-17 04:44:17.403389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:38.821 [2024-11-17 04:44:17.403395] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403403] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.821 [2024-11-17 04:44:17.403411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.821 [2024-11-17 04:44:17.403427] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.822 [2024-11-17 04:44:17.403434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:38.822 [2024-11-17 04:44:17.403441] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:38.822 [2024-11-17 04:44:17.403447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 
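The trace above records the standard NVMe controller enable handshake, driven here over RDMA by Fabrics Property Get/Set instead of MMIO: read CC, clear CC.EN and wait for CSTS.RDY = 0, write CC.EN = 1, then poll until CSTS.RDY = 1. Below is a minimal, self-contained C sketch of that handshake, not the driver's code: prop_get()/prop_set() are toy stand-ins for the FABRIC PROPERTY GET/SET round-trips in the log (register offsets per the NVMe spec: CC at 0x14, CSTS at 0x1c), and the simulated status register makes RDY follow EN immediately, as the cdw0:0 / cdw0:1 completions above do.

#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC   0x14u  /* Controller Configuration */
#define NVME_REG_CSTS 0x1cu  /* Controller Status */
#define NVME_CC_EN    0x1u
#define NVME_CSTS_RDY 0x1u

static uint32_t g_cc, g_csts;

/* Toy stand-ins for the FABRIC PROPERTY GET/SET commands in the trace. */
static uint32_t prop_get(uint32_t ofs)
{
    return ofs == NVME_REG_CC ? g_cc : g_csts;
}

static void prop_set(uint32_t ofs, uint32_t val)
{
    if (ofs == NVME_REG_CC) {
        g_cc = val;
        /* Model: CSTS.RDY tracks CC.EN immediately. */
        g_csts = (val & NVME_CC_EN) ? NVME_CSTS_RDY : 0;
    }
}

int main(void)
{
    uint32_t cc = prop_get(NVME_REG_CC);

    if (cc & NVME_CC_EN) {                        /* "check en" */
        prop_set(NVME_REG_CC, cc & ~NVME_CC_EN);  /* disable first */
        while (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY) {
        }                                         /* wait for CSTS.RDY = 0 */
    }
    prop_set(NVME_REG_CC, cc | NVME_CC_EN);       /* "Setting CC.EN = 1" */
    while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
    }                                             /* wait for CSTS.RDY = 1 */
    printf("controller is ready\n");
    return 0;
}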
00:27:38.822 [2024-11-17 04:44:17.403452] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403459] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:27:38.822 [2024-11-17 04:44:17.403468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403477] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180700 00:27:38.822 [2024-11-17 04:44:17.403522] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.822 [2024-11-17 04:44:17.403528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.822 [2024-11-17 04:44:17.403537] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:27:38.822 [2024-11-17 04:44:17.403542] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:27:38.822 [2024-11-17 04:44:17.403548] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:27:38.822 [2024-11-17 04:44:17.403553] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:27:38.822 [2024-11-17 04:44:17.403559] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:27:38.822 [2024-11-17 04:44:17.403565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403571] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403588] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.822 [2024-11-17 04:44:17.403614] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.822 [2024-11-17 04:44:17.403620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.822 [2024-11-17 04:44:17.403628] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.822 [2024-11-17 04:44:17.403642] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 
0x180700 00:27:38.822 [2024-11-17 04:44:17.403649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.822 [2024-11-17 04:44:17.403656] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.822 [2024-11-17 04:44:17.403673] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.822 [2024-11-17 04:44:17.403686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403692] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403709] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.822 [2024-11-17 04:44:17.403739] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.822 [2024-11-17 04:44:17.403745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:38.822 [2024-11-17 04:44:17.403751] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:27:38.822 [2024-11-17 04:44:17.403757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403763] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403787] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.822 [2024-11-17 04:44:17.403820] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.822 [2024-11-17 04:44:17.403825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:27:38.822 [2024-11-17 
04:44:17.403877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403883] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403899] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180700 00:27:38.822 [2024-11-17 04:44:17.403936] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.822 [2024-11-17 04:44:17.403942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.822 [2024-11-17 04:44:17.403954] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:27:38.822 [2024-11-17 04:44:17.403968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403974] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.403990] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.403998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180700 00:27:38.822 [2024-11-17 04:44:17.404027] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.822 [2024-11-17 04:44:17.404033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.822 [2024-11-17 04:44:17.404045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.404051] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.404059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.404067] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.404074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180700 00:27:38.822 [2024-11-17 04:44:17.404096] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.822 [2024-11-17 04:44:17.404102] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.822 [2024-11-17 04:44:17.404111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.404117] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.404124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.404133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.404140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.404146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.404152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.404158] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:27:38.822 [2024-11-17 04:44:17.404164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:27:38.822 [2024-11-17 04:44:17.404170] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:27:38.822 [2024-11-17 04:44:17.404184] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.404191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.822 [2024-11-17 04:44:17.404201] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180700 00:27:38.822 [2024-11-17 04:44:17.404208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.823 [2024-11-17 04:44:17.404218] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.823 [2024-11-17 04:44:17.404224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.823 [2024-11-17 04:44:17.404230] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404237] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.823 [2024-11-17 04:44:17.404242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.823 [2024-11-17 04:44:17.404248] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404257] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180700 
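Once the state machine reaches "ready", the tool sweeps Get Features (arbitration, power management, temperature threshold, number of queues) and Get Log Page. In the GET LOG PAGE commands above, cdw10 packs the log identifier in its low byte and the zero-based dword count above it: 007f0002 is 0x80 dwords (512 bytes) of log page 0x02 (SMART / health information), matching len:0x200 in the keyed SGL, and 07ff0001 is 0x800 dwords (8 KiB) of the error log, matching len:0x2000. A hedged sketch of issuing the same SMART read through SPDK's public API follows; this is not the test's own code, and `ctrlr` is assumed to be an already-connected controller handle.

#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static volatile bool g_done;

static void log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg;
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "GET LOG PAGE failed\n");
    }
    g_done = true;
}

static int dump_health(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_health_information_page *health;

    /* DMA-able buffer so the RDMA transport can post it as a keyed SGL,
     * like the SGL KEYED DATA BLOCK entries in the trace. */
    health = spdk_dma_zmalloc(sizeof(*health), 0x1000, NULL);
    if (health == NULL) {
        return -1;
    }
    if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                         SPDK_NVME_GLOBAL_NS_TAG, health,
                                         sizeof(*health), 0,
                                         log_page_done, NULL) != 0) {
        spdk_dma_free(health);
        return -1;
    }
    while (!g_done) {
        /* Poll the admin qpair; this drives the CQ recv completions
         * that the debug lines above report. */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    printf("composite temperature: %u K\n", health->temperature);
    spdk_dma_free(health);
    return 0;
}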
00:27:38.823 [2024-11-17 04:44:17.404265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.823 [2024-11-17 04:44:17.404279] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.823 [2024-11-17 04:44:17.404285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.823 [2024-11-17 04:44:17.404291] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404300] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404307] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.823 [2024-11-17 04:44:17.404324] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.823 [2024-11-17 04:44:17.404330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.823 [2024-11-17 04:44:17.404336] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404345] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.823 [2024-11-17 04:44:17.404376] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.823 [2024-11-17 04:44:17.404381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:27:38.823 [2024-11-17 04:44:17.404387] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404400] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180700 00:27:38.823 [2024-11-17 04:44:17.404417] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x180700 00:27:38.823 [2024-11-17 04:44:17.404434] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x180700 00:27:38.823 [2024-11-17 04:44:17.404450] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180700 00:27:38.823 
[2024-11-17 04:44:17.404458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x180700 00:27:38.823 [2024-11-17 04:44:17.404466] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.823 [2024-11-17 04:44:17.404472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.823 [2024-11-17 04:44:17.404483] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404489] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.823 [2024-11-17 04:44:17.404494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.823 [2024-11-17 04:44:17.404507] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404513] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.823 [2024-11-17 04:44:17.404519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.823 [2024-11-17 04:44:17.404526] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180700 00:27:38.823 [2024-11-17 04:44:17.404532] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.823 [2024-11-17 04:44:17.404537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.823 [2024-11-17 04:44:17.404546] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180700 00:27:38.823 ===================================================== 00:27:38.823 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.823 ===================================================== 00:27:38.823 Controller Capabilities/Features 00:27:38.823 ================================ 00:27:38.823 Vendor ID: 8086 00:27:38.823 Subsystem Vendor ID: 8086 00:27:38.823 Serial Number: SPDK00000000000001 00:27:38.823 Model Number: SPDK bdev Controller 00:27:38.823 Firmware Version: 25.01 00:27:38.823 Recommended Arb Burst: 6 00:27:38.823 IEEE OUI Identifier: e4 d2 5c 00:27:38.823 Multi-path I/O 00:27:38.823 May have multiple subsystem ports: Yes 00:27:38.823 May have multiple controllers: Yes 00:27:38.823 Associated with SR-IOV VF: No 00:27:38.823 Max Data Transfer Size: 131072 00:27:38.823 Max Number of Namespaces: 32 00:27:38.823 Max Number of I/O Queues: 127 00:27:38.823 NVMe Specification Version (VS): 1.3 00:27:38.823 NVMe Specification Version (Identify): 1.3 00:27:38.823 Maximum Queue Entries: 128 00:27:38.823 Contiguous Queues Required: Yes 00:27:38.823 Arbitration Mechanisms Supported 00:27:38.823 Weighted Round Robin: Not Supported 00:27:38.823 Vendor Specific: Not Supported 00:27:38.823 Reset Timeout: 15000 ms 00:27:38.823 Doorbell Stride: 4 bytes 00:27:38.823 NVM Subsystem Reset: Not Supported 00:27:38.823 Command Sets Supported 00:27:38.823 NVM Command Set: Supported 00:27:38.823 Boot Partition: Not Supported 00:27:38.823 Memory Page Size Minimum: 4096 bytes 00:27:38.823 Memory Page Size Maximum: 4096 bytes 00:27:38.823 Persistent Memory Region: Not Supported 
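The report that begins at the banner above is rendered from the identify-controller data (IDENTIFY with CNS 01h) fetched earlier in the trace; SPDK caches that structure on the controller handle, so an application can print the same header fields without reissuing the command. A small sketch, again assuming a `ctrlr` obtained from spdk_nvme_connect(); the %.Ns widths matter because sn/mn/fr are space-padded, not NUL-terminated.

#include <stdio.h>
#include "spdk/nvme.h"

static void print_report_header(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    printf("Vendor ID: %04x\n", cdata->vid);              /* 8086 */
    printf("Subsystem Vendor ID: %04x\n", cdata->ssvid);  /* 8086 */
    printf("Serial Number: %.20s\n", cdata->sn);          /* SPDK00000000000001 */
    printf("Model Number: %.40s\n", cdata->mn);           /* SPDK bdev Controller */
    printf("Firmware Version: %.8s\n", cdata->fr);        /* 25.01 */
    printf("Max Number of Namespaces: %u\n", cdata->nn);  /* 32 */
}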
00:27:38.823 Optional Asynchronous Events Supported 00:27:38.823 Namespace Attribute Notices: Supported 00:27:38.823 Firmware Activation Notices: Not Supported 00:27:38.823 ANA Change Notices: Not Supported 00:27:38.823 PLE Aggregate Log Change Notices: Not Supported 00:27:38.823 LBA Status Info Alert Notices: Not Supported 00:27:38.823 EGE Aggregate Log Change Notices: Not Supported 00:27:38.823 Normal NVM Subsystem Shutdown event: Not Supported 00:27:38.823 Zone Descriptor Change Notices: Not Supported 00:27:38.823 Discovery Log Change Notices: Not Supported 00:27:38.823 Controller Attributes 00:27:38.823 128-bit Host Identifier: Supported 00:27:38.823 Non-Operational Permissive Mode: Not Supported 00:27:38.823 NVM Sets: Not Supported 00:27:38.823 Read Recovery Levels: Not Supported 00:27:38.823 Endurance Groups: Not Supported 00:27:38.823 Predictable Latency Mode: Not Supported 00:27:38.823 Traffic Based Keep ALive: Not Supported 00:27:38.823 Namespace Granularity: Not Supported 00:27:38.823 SQ Associations: Not Supported 00:27:38.823 UUID List: Not Supported 00:27:38.823 Multi-Domain Subsystem: Not Supported 00:27:38.823 Fixed Capacity Management: Not Supported 00:27:38.823 Variable Capacity Management: Not Supported 00:27:38.823 Delete Endurance Group: Not Supported 00:27:38.823 Delete NVM Set: Not Supported 00:27:38.823 Extended LBA Formats Supported: Not Supported 00:27:38.823 Flexible Data Placement Supported: Not Supported 00:27:38.823 00:27:38.823 Controller Memory Buffer Support 00:27:38.823 ================================ 00:27:38.823 Supported: No 00:27:38.823 00:27:38.823 Persistent Memory Region Support 00:27:38.823 ================================ 00:27:38.823 Supported: No 00:27:38.823 00:27:38.823 Admin Command Set Attributes 00:27:38.823 ============================ 00:27:38.823 Security Send/Receive: Not Supported 00:27:38.823 Format NVM: Not Supported 00:27:38.823 Firmware Activate/Download: Not Supported 00:27:38.823 Namespace Management: Not Supported 00:27:38.823 Device Self-Test: Not Supported 00:27:38.823 Directives: Not Supported 00:27:38.823 NVMe-MI: Not Supported 00:27:38.823 Virtualization Management: Not Supported 00:27:38.823 Doorbell Buffer Config: Not Supported 00:27:38.823 Get LBA Status Capability: Not Supported 00:27:38.823 Command & Feature Lockdown Capability: Not Supported 00:27:38.823 Abort Command Limit: 4 00:27:38.823 Async Event Request Limit: 4 00:27:38.823 Number of Firmware Slots: N/A 00:27:38.823 Firmware Slot 1 Read-Only: N/A 00:27:38.823 Firmware Activation Without Reset: N/A 00:27:38.823 Multiple Update Detection Support: N/A 00:27:38.823 Firmware Update Granularity: No Information Provided 00:27:38.823 Per-Namespace SMART Log: No 00:27:38.823 Asymmetric Namespace Access Log Page: Not Supported 00:27:38.823 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:38.823 Command Effects Log Page: Supported 00:27:38.823 Get Log Page Extended Data: Supported 00:27:38.823 Telemetry Log Pages: Not Supported 00:27:38.824 Persistent Event Log Pages: Not Supported 00:27:38.824 Supported Log Pages Log Page: May Support 00:27:38.824 Commands Supported & Effects Log Page: Not Supported 00:27:38.824 Feature Identifiers & Effects Log Page:May Support 00:27:38.824 NVMe-MI Commands & Effects Log Page: May Support 00:27:38.824 Data Area 4 for Telemetry Log: Not Supported 00:27:38.824 Error Log Page Entries Supported: 128 00:27:38.824 Keep Alive: Supported 00:27:38.824 Keep Alive Granularity: 10000 ms 00:27:38.824 00:27:38.824 NVM Command Set Attributes 
00:27:38.824 ========================== 00:27:38.824 Submission Queue Entry Size 00:27:38.824 Max: 64 00:27:38.824 Min: 64 00:27:38.824 Completion Queue Entry Size 00:27:38.824 Max: 16 00:27:38.824 Min: 16 00:27:38.824 Number of Namespaces: 32 00:27:38.824 Compare Command: Supported 00:27:38.824 Write Uncorrectable Command: Not Supported 00:27:38.824 Dataset Management Command: Supported 00:27:38.824 Write Zeroes Command: Supported 00:27:38.824 Set Features Save Field: Not Supported 00:27:38.824 Reservations: Supported 00:27:38.824 Timestamp: Not Supported 00:27:38.824 Copy: Supported 00:27:38.824 Volatile Write Cache: Present 00:27:38.824 Atomic Write Unit (Normal): 1 00:27:38.824 Atomic Write Unit (PFail): 1 00:27:38.824 Atomic Compare & Write Unit: 1 00:27:38.824 Fused Compare & Write: Supported 00:27:38.824 Scatter-Gather List 00:27:38.824 SGL Command Set: Supported 00:27:38.824 SGL Keyed: Supported 00:27:38.824 SGL Bit Bucket Descriptor: Not Supported 00:27:38.824 SGL Metadata Pointer: Not Supported 00:27:38.824 Oversized SGL: Not Supported 00:27:38.824 SGL Metadata Address: Not Supported 00:27:38.824 SGL Offset: Supported 00:27:38.824 Transport SGL Data Block: Not Supported 00:27:38.824 Replay Protected Memory Block: Not Supported 00:27:38.824 00:27:38.824 Firmware Slot Information 00:27:38.824 ========================= 00:27:38.824 Active slot: 1 00:27:38.824 Slot 1 Firmware Revision: 25.01 00:27:38.824 00:27:38.824 00:27:38.824 Commands Supported and Effects 00:27:38.824 ============================== 00:27:38.824 Admin Commands 00:27:38.824 -------------- 00:27:38.824 Get Log Page (02h): Supported 00:27:38.824 Identify (06h): Supported 00:27:38.824 Abort (08h): Supported 00:27:38.824 Set Features (09h): Supported 00:27:38.824 Get Features (0Ah): Supported 00:27:38.824 Asynchronous Event Request (0Ch): Supported 00:27:38.824 Keep Alive (18h): Supported 00:27:38.824 I/O Commands 00:27:38.824 ------------ 00:27:38.824 Flush (00h): Supported LBA-Change 00:27:38.824 Write (01h): Supported LBA-Change 00:27:38.824 Read (02h): Supported 00:27:38.824 Compare (05h): Supported 00:27:38.824 Write Zeroes (08h): Supported LBA-Change 00:27:38.824 Dataset Management (09h): Supported LBA-Change 00:27:38.824 Copy (19h): Supported LBA-Change 00:27:38.824 00:27:38.824 Error Log 00:27:38.824 ========= 00:27:38.824 00:27:38.824 Arbitration 00:27:38.824 =========== 00:27:38.824 Arbitration Burst: 1 00:27:38.824 00:27:38.824 Power Management 00:27:38.824 ================ 00:27:38.824 Number of Power States: 1 00:27:38.824 Current Power State: Power State #0 00:27:38.824 Power State #0: 00:27:38.824 Max Power: 0.00 W 00:27:38.824 Non-Operational State: Operational 00:27:38.824 Entry Latency: Not Reported 00:27:38.824 Exit Latency: Not Reported 00:27:38.824 Relative Read Throughput: 0 00:27:38.824 Relative Read Latency: 0 00:27:38.824 Relative Write Throughput: 0 00:27:38.824 Relative Write Latency: 0 00:27:38.824 Idle Power: Not Reported 00:27:38.824 Active Power: Not Reported 00:27:38.824 Non-Operational Permissive Mode: Not Supported 00:27:38.824 00:27:38.824 Health Information 00:27:38.824 ================== 00:27:38.824 Critical Warnings: 00:27:38.824 Available Spare Space: OK 00:27:38.824 Temperature: OK 00:27:38.824 Device Reliability: OK 00:27:38.824 Read Only: No 00:27:38.824 Volatile Memory Backup: OK 00:27:38.824 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:38.824 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:38.824 Available Spare: 0% 00:27:38.824 Available Spare Threshold: 0% 
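From this point the debug lines interleaved with the tail of the report record teardown: "Prepare to destruct SSD", the four outstanding ASYNC EVENT REQUESTs completed as ABORTED - SQ DELETION, then the shutdown handshake (RTD3E = 0 us, so the driver falls back to its default 10000 ms shutdown timeout) with repeated FABRIC PROPERTY GETs polling CSTS until shutdown completes. In application code, the whole trace, init sweep and teardown alike, is driven by a single connect/detach pair. A minimal sketch follows, with the transport type, target address, and subsystem NQN taken from the log itself; it is not necessarily how the identify example is structured internally.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify";
    if (spdk_env_init(&env_opts) < 0) {         /* DPDK EAL init, as at the top of the trace */
        return 1;
    }

    spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_RDMA);
    trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;        /* "adrfam 1 ai_family 2" */
    snprintf(trid.traddr, sizeof(trid.traddr), "192.168.100.8");
    snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
    snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

    /* Runs the full init state machine logged above, from FABRIC CONNECT
     * through "setting state to ready". */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }
    printf("connected to %.40s\n", spdk_nvme_ctrlr_get_data(ctrlr)->mn);

    /* Drives the teardown logged below: AER aborts, CC shutdown write,
     * and CSTS polling until the shutdown handshake finishes. */
    spdk_nvme_detach(ctrlr);
    return 0;
}

spdk_nvme_connect() blocks until the init state machine reaches "ready", which is why every state transition above happens inside that one call; the repeated property-get completions with cdw0:1 after this point are the detach path polling CSTS while RDY is still set and shutdown processing has not yet completed.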
00:27:38.824 Life Percentage [2024-11-17 04:44:17.404619] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.824 [2024-11-17 04:44:17.404651] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.824 [2024-11-17 04:44:17.404657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.824 [2024-11-17 04:44:17.404663] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404689] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:27:38.824 [2024-11-17 04:44:17.404698] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12550 doesn't match qid 00:27:38.824 [2024-11-17 04:44:17.404711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:5e166e60 sqhd:4320 p:0 m:0 dnr:0 00:27:38.824 [2024-11-17 04:44:17.404718] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12550 doesn't match qid 00:27:38.824 [2024-11-17 04:44:17.404726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:5e166e60 sqhd:4320 p:0 m:0 dnr:0 00:27:38.824 [2024-11-17 04:44:17.404732] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12550 doesn't match qid 00:27:38.824 [2024-11-17 04:44:17.404739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:5e166e60 sqhd:4320 p:0 m:0 dnr:0 00:27:38.824 [2024-11-17 04:44:17.404746] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12550 doesn't match qid 00:27:38.824 [2024-11-17 04:44:17.404755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:5e166e60 sqhd:4320 p:0 m:0 dnr:0 00:27:38.824 [2024-11-17 04:44:17.404763] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.824 [2024-11-17 04:44:17.404788] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.824 [2024-11-17 04:44:17.404794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:27:38.824 [2024-11-17 04:44:17.404802] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.824 [2024-11-17 04:44:17.404816] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404834] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.824 [2024-11-17 04:44:17.404839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 
dnr:0 00:27:38.824 [2024-11-17 04:44:17.404846] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:27:38.824 [2024-11-17 04:44:17.404852] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:27:38.824 [2024-11-17 04:44:17.404858] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404866] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.824 [2024-11-17 04:44:17.404897] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.824 [2024-11-17 04:44:17.404903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:38.824 [2024-11-17 04:44:17.404910] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404918] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.824 [2024-11-17 04:44:17.404946] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.824 [2024-11-17 04:44:17.404952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:38.824 [2024-11-17 04:44:17.404958] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404966] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.404974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.824 [2024-11-17 04:44:17.404997] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.824 [2024-11-17 04:44:17.405003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:38.824 [2024-11-17 04:44:17.405009] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.405018] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.824 [2024-11-17 04:44:17.405026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.824 [2024-11-17 04:44:17.405043] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405055] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405063] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405091] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405103] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405112] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405140] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405151] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405160] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405186] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405199] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405208] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405231] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405243] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405252] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405282] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405296] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405304] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405329] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405341] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405351] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405383] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405396] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405404] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405432] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405444] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405453] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405476] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405488] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405497] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405523] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405535] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405544] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405569] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405581] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405591] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405614] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405626] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405635] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405660] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405672] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405680] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:38.825 [2024-11-17 04:44:17.405706] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.825 [2024-11-17 04:44:17.405711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:27:38.825 [2024-11-17 04:44:17.405717] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405726] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180700 00:27:38.825 [2024-11-17 04:44:17.405734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
[... 04:44:17.405753-04:44:17.410971: the same submit/complete cycle repeats for each shutdown-poll FABRIC PROPERTY GET on qid:0 cid:3 -- CQ recv completion, SUCCESS (00/00) cdw0:1 with sqhd advancing 000d through 001f and wrapping 0000 through 0006, nvme_rdma_request_ready, then the next submit; entries identical apart from timestamps, sqhd, and per-request local addr, duplicate *DEBUG*/*NOTICE* lines elided ...]
00:27:38.827 [2024-11-17 04:44:17.410987] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:38.827 [2024-11-17 04:44:17.410994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0007 p:0 m:0 dnr:0 00:27:38.827 [2024-11-17 04:44:17.411000] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180700 00:27:38.827 [2024-11-17 04:44:17.411007] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:27:38.827 Used: 0% 00:27:38.827 Data Units Read: 0 00:27:38.827 Data Units Written: 0 00:27:38.827 Host Read Commands: 0 00:27:38.827 Host Write Commands: 0 00:27:38.827 Controller Busy Time: 0 minutes 00:27:38.827 Power Cycles: 0 00:27:38.827 Power On Hours: 0 hours 00:27:38.827 Unsafe Shutdowns: 0 00:27:38.827 Unrecoverable Media Errors: 0 00:27:38.827 Lifetime Error Log Entries: 0 00:27:38.827 Warning Temperature Time: 0 minutes 00:27:38.827 Critical Temperature Time: 0 minutes 00:27:38.827 00:27:38.827 Number of Queues 00:27:38.827 ================ 00:27:38.827 Number of I/O Submission Queues: 127 00:27:38.827 Number of I/O Completion
Queues: 127 00:27:38.827 00:27:38.827 Active Namespaces 00:27:38.827 ================= 00:27:38.827 Namespace ID:1 00:27:38.827 Error Recovery Timeout: Unlimited 00:27:38.827 Command Set Identifier: NVM (00h) 00:27:38.827 Deallocate: Supported 00:27:38.827 Deallocated/Unwritten Error: Not Supported 00:27:38.827 Deallocated Read Value: Unknown 00:27:38.827 Deallocate in Write Zeroes: Not Supported 00:27:38.827 Deallocated Guard Field: 0xFFFF 00:27:38.827 Flush: Supported 00:27:38.827 Reservation: Supported 00:27:38.827 Namespace Sharing Capabilities: Multiple Controllers 00:27:38.827 Size (in LBAs): 131072 (0GiB) 00:27:38.827 Capacity (in LBAs): 131072 (0GiB) 00:27:38.827 Utilization (in LBAs): 131072 (0GiB) 00:27:38.827 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:38.827 EUI64: ABCDEF0123456789 00:27:38.827 UUID: 2f94e78f-4914-44f7-aa1d-ce386b0c0ef9 00:27:38.827 Thin Provisioning: Not Supported 00:27:38.827 Per-NS Atomic Units: Yes 00:27:38.827 Atomic Boundary Size (Normal): 0 00:27:38.827 Atomic Boundary Size (PFail): 0 00:27:38.827 Atomic Boundary Offset: 0 00:27:38.827 Maximum Single Source Range Length: 65535 00:27:38.827 Maximum Copy Length: 65535 00:27:38.827 Maximum Source Range Count: 1 00:27:38.827 NGUID/EUI64 Never Reused: No 00:27:38.827 Namespace Write Protected: No 00:27:38.827 Number of LBA Formats: 1 00:27:38.827 Current LBA Format: LBA Format #00 00:27:38.827 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:38.827 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:38.827 rmmod nvme_rdma 00:27:38.827 rmmod nvme_fabrics 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 423257 ']' 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 423257 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # 
'[' -z 423257 ']' 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 423257 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 423257 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 423257' 00:27:38.827 killing process with pid 423257 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 423257 00:27:38.827 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 423257 00:27:39.087 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:39.087 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:39.087 00:27:39.087 real 0m8.746s 00:27:39.087 user 0m6.398s 00:27:39.087 sys 0m6.091s 00:27:39.087 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:39.087 04:44:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:39.087 ************************************ 00:27:39.087 END TEST nvmf_identify 00:27:39.087 ************************************ 00:27:39.087 04:44:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:39.087 04:44:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:39.087 04:44:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.087 04:44:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.347 ************************************ 00:27:39.347 START TEST nvmf_perf 00:27:39.347 ************************************ 00:27:39.347 04:44:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:39.347 * Looking for test storage... 
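[Editor's note: the xtrace above is autotest_common.sh's killprocess helper tearing down the identify-test target (pid 423257). A minimal sketch of the flow the trace shows, reconstructed only from the visible commands; the real helper carries extra retry and sudo handling that is not captured here.]

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1          # '[' -z 423257 ']' in the trace
        kill -0 "$pid" || return 0         # liveness probe (early-out assumed)
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            # comm= prints just the executable name, e.g. reactor_0
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # guard: never signal a sudo wrapper as if it were the app itself
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                    # reap and propagate exit status
        fi
    }

[kill -0 probes liveness without delivering a signal; the ps comm= check is what maps pid 423257 to reactor_0 above before the real kill is sent.]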
00:27:39.347 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:39.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.347 --rc genhtml_branch_coverage=1 00:27:39.347 --rc genhtml_function_coverage=1 00:27:39.347 --rc genhtml_legend=1 00:27:39.347 --rc geninfo_all_blocks=1 00:27:39.347 --rc geninfo_unexecuted_blocks=1 00:27:39.347 00:27:39.347 ' 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:39.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.347 --rc genhtml_branch_coverage=1 00:27:39.347 --rc genhtml_function_coverage=1 00:27:39.347 --rc genhtml_legend=1 00:27:39.347 --rc geninfo_all_blocks=1 00:27:39.347 --rc geninfo_unexecuted_blocks=1 00:27:39.347 00:27:39.347 ' 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:39.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.347 --rc genhtml_branch_coverage=1 00:27:39.347 --rc genhtml_function_coverage=1 00:27:39.347 --rc genhtml_legend=1 00:27:39.347 --rc geninfo_all_blocks=1 00:27:39.347 --rc geninfo_unexecuted_blocks=1 00:27:39.347 00:27:39.347 ' 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:39.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.347 --rc genhtml_branch_coverage=1 00:27:39.347 --rc genhtml_function_coverage=1 00:27:39.347 --rc genhtml_legend=1 00:27:39.347 --rc geninfo_all_blocks=1 00:27:39.347 --rc geninfo_unexecuted_blocks=1 00:27:39.347 00:27:39.347 ' 00:27:39.347 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.348 04:44:18 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.348 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.348 04:44:18 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.348 04:44:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.472 04:44:25 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:47.472 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:47.472 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:47.472 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
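[Editor's note: the discovery loop above maps each supported PCI function to its kernel network interface purely through sysfs. A condensed sketch of that loop as the trace shows it, with array names as in nvmf/common.sh; nullglob is added here so an unbound device yields an empty match.]

    shopt -s nullglob                      # empty array when no netdev is bound
    pci_devs=(0000:d9:00.0 0000:d9:00.1)   # the two mlx5 functions found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} == 0 )) && continue   # device has no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")      # strip path -> mlx_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

[On this node both functions resolve to mlx_0_0 and mlx_0_1, which rdma_device_init then backs with the ib_*/rdma_* kernel modules loaded below.]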
00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:47.472 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:27:47.472 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:47.473 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:47.473 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:47.473 altname enp217s0f0np0 00:27:47.473 altname ens818f0np0 00:27:47.473 inet 192.168.100.8/24 scope global mlx_0_0 00:27:47.473 valid_lft forever preferred_lft forever 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:47.473 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:47.473 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:47.473 altname enp217s0f1np1 00:27:47.473 altname ens818f1np1 00:27:47.473 inet 192.168.100.9/24 scope global mlx_0_1 00:27:47.473 valid_lft forever preferred_lft forever 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
-- # '[' '' == iso ']' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:27:47.473 192.168.100.9' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:47.473 192.168.100.9' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:47.473 192.168.100.9' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=426848 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 426848 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 426848 ']' 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.473 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:47.473 [2024-11-17 04:44:25.450292] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
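[Editor's note: with mlx_0_0 and mlx_0_1 up at 192.168.100.8/9, the trace derives the target IPs and launches the NVMe-oF target. A sketch of those steps assembled from the commands visible above; nvmfappstart and waitforlisten are SPDK test helpers, and the backgrounding shown is assumed from the captured pid assignment.]

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'    # one IP per RDMA interface
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma                              # host-side initiator driver
    # nvmfappstart -m 0xF: launch the target, then wait for its RPC socket
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # backgrounding assumed
    nvmfpid=$!
    waitforlisten "$nvmfpid"                        # polls until the app answers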
00:27:47.474 [2024-11-17 04:44:25.450343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.474 [2024-11-17 04:44:25.539669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.474 [2024-11-17 04:44:25.562190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.474 [2024-11-17 04:44:25.562227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.474 [2024-11-17 04:44:25.562239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.474 [2024-11-17 04:44:25.562247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.474 [2024-11-17 04:44:25.562254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.474 [2024-11-17 04:44:25.563985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.474 [2024-11-17 04:44:25.564099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.474 [2024-11-17 04:44:25.564209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.474 [2024-11-17 04:44:25.564211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.474 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.474 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:27:47.474 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:47.474 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:47.474 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:47.474 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.474 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:47.474 04:44:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:50.010 04:44:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:50.010 04:44:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:50.269 04:44:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:27:50.269 04:44:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:50.527 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:50.527 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:27:50.527 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:50.527 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:27:50.527 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:27:50.785 [2024-11-17 04:44:29.360335] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:27:50.785 [2024-11-17 04:44:29.381380] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x166df70/0x15605c0) succeed. 00:27:50.785 [2024-11-17 04:44:29.391021] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1670610/0x15a1c60) succeed. 00:27:50.785 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:51.044 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:51.044 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:51.303 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:51.303 04:44:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:51.303 04:44:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:51.562 [2024-11-17 04:44:30.291173] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:51.562 04:44:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:51.821 04:44:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:27:51.821 04:44:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:27:51.821 04:44:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:51.821 04:44:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:27:53.202 Initializing NVMe Controllers 00:27:53.202 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:27:53.202 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:27:53.202 Initialization complete. Launching workers. 
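Condensed, the target bring-up just traced from host/perf.sh is the following RPC sequence (commands and arguments exactly as they appear above; only the long rpc.py path is abbreviated):

  # Sketch of the traced bring-up; rpc.py stands for spdk/scripts/rpc.py.
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # 64 MiB malloc bdev, 512 B blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe at 0000:d8:00.0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The spdk_nvme_perf run launching here deliberately bypasses the fabric (trtype:PCIe traddr:0000:d8:00.0) to record a local baseline before the RDMA runs; its latency table follows.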
00:27:53.202 ======================================================== 00:27:53.202 Latency(us) 00:27:53.202 Device Information : IOPS MiB/s Average min max 00:27:53.202 PCIE (0000:d8:00.0) NSID 1 from core 0: 100696.32 393.35 317.35 20.96 7188.59 00:27:53.202 ======================================================== 00:27:53.202 Total : 100696.32 393.35 317.35 20.96 7188.59 00:27:53.202 00:27:53.202 04:44:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:56.490 Initializing NVMe Controllers 00:27:56.490 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:56.490 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:56.490 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:56.490 Initialization complete. Launching workers. 00:27:56.490 ======================================================== 00:27:56.491 Latency(us) 00:27:56.491 Device Information : IOPS MiB/s Average min max 00:27:56.491 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6621.10 25.86 150.68 48.28 7047.54 00:27:56.491 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5122.93 20.01 194.82 60.99 7068.00 00:27:56.491 ======================================================== 00:27:56.491 Total : 11744.03 45.88 169.93 48.28 7068.00 00:27:56.491 00:27:56.491 04:44:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:59.783 Initializing NVMe Controllers 00:27:59.783 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.783 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:59.783 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:59.783 Initialization complete. Launching workers. 00:27:59.783 ======================================================== 00:27:59.783 Latency(us) 00:27:59.783 Device Information : IOPS MiB/s Average min max 00:27:59.783 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18458.64 72.10 1727.51 491.28 7253.02 00:27:59.783 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4028.87 15.74 7971.19 5850.19 10149.74 00:27:59.783 ======================================================== 00:27:59.783 Total : 22487.50 87.84 2846.13 491.28 10149.74 00:27:59.783 00:27:59.783 04:44:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:27:59.783 04:44:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:05.057 Initializing NVMe Controllers 00:28:05.057 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.057 Controller IO queue size 128, less than required. 00:28:05.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:05.057 Controller IO queue size 128, less than required. 00:28:05.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:05.058 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:05.058 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:05.058 Initialization complete. Launching workers. 00:28:05.058 ======================================================== 00:28:05.058 Latency(us) 00:28:05.058 Device Information : IOPS MiB/s Average min max 00:28:05.058 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3968.00 992.00 32517.88 15269.94 85676.12 00:28:05.058 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3976.00 994.00 31783.59 15397.71 75760.10 00:28:05.058 ======================================================== 00:28:05.058 Total : 7944.00 1986.00 32150.36 15269.94 85676.12 00:28:05.058 00:28:05.058 04:44:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:28:05.058 No valid NVMe controllers or AIO or URING devices found 00:28:05.058 Initializing NVMe Controllers 00:28:05.058 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.058 Controller IO queue size 128, less than required. 00:28:05.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:05.058 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:05.058 Controller IO queue size 128, less than required. 00:28:05.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:05.058 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:05.058 WARNING: Some requested NVMe devices were skipped 00:28:05.058 04:44:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:28:09.255 Initializing NVMe Controllers 00:28:09.256 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.256 Controller IO queue size 128, less than required. 00:28:09.256 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.256 Controller IO queue size 128, less than required. 00:28:09.256 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.256 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:09.256 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:09.256 Initialization complete. Launching workers. 
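This run was started with --transport-stat, so after its latency table spdk_nvme_perf also dumps per-queue RDMA counters: polls, idle_polls, completions, work requests, and doorbell updates. Two quick health indicators can be derived from the NSID 1 block below (numbers copied from that block; the calculation itself is illustrative, not part of the harness):

  # Fraction of polls that actually found completions:
  echo "scale=4; (404100 - 400507) / 404100" | bc    # ~0.0089, i.e. ~0.9% busy polls
  # Send work requests batched per doorbell ring:
  echo "scale=1; 22687 / 3382" | bc                  # ~6.7 WRs per doorbell update

A low busy-poll fraction combined with several WRs per doorbell suggests the reactor polls far faster than work arrives while submissions still batch well.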
00:28:09.256 00:28:09.256 ==================== 00:28:09.256 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:09.256 RDMA transport: 00:28:09.256 dev name: mlx5_0 00:28:09.256 polls: 404100 00:28:09.256 idle_polls: 400507 00:28:09.256 completions: 45374 00:28:09.256 queued_requests: 1 00:28:09.256 total_send_wrs: 22687 00:28:09.256 send_doorbell_updates: 3382 00:28:09.256 total_recv_wrs: 22814 00:28:09.256 recv_doorbell_updates: 3383 00:28:09.256 --------------------------------- 00:28:09.256 00:28:09.256 ==================== 00:28:09.256 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:09.256 RDMA transport: 00:28:09.256 dev name: mlx5_0 00:28:09.256 polls: 406001 00:28:09.256 idle_polls: 405725 00:28:09.256 completions: 20146 00:28:09.256 queued_requests: 1 00:28:09.256 total_send_wrs: 10073 00:28:09.256 send_doorbell_updates: 252 00:28:09.256 total_recv_wrs: 10200 00:28:09.256 recv_doorbell_updates: 253 00:28:09.256 --------------------------------- 00:28:09.256 ======================================================== 00:28:09.256 Latency(us) 00:28:09.256 Device Information : IOPS MiB/s Average min max 00:28:09.256 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5669.67 1417.42 22645.11 11544.42 66123.88 00:28:09.256 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2517.19 629.30 50772.64 31275.94 78622.30 00:28:09.256 ======================================================== 00:28:09.256 Total : 8186.86 2046.71 31293.40 11544.42 78622.30 00:28:09.256 00:28:09.256 04:44:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:09.256 04:44:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:09.256 04:44:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:09.256 04:44:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:28:09.256 04:44:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=222852f8-2367-459a-afa3-d89f6ab9c0e4 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 222852f8-2367-459a-afa3-d89f6ab9c0e4 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=222852f8-2367-459a-afa3-d89f6ab9c0e4 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:28:15.851 { 00:28:15.851 "uuid": "222852f8-2367-459a-afa3-d89f6ab9c0e4", 00:28:15.851 "name": "lvs_0", 00:28:15.851 "base_bdev": "Nvme0n1", 00:28:15.851 "total_data_clusters": 476466, 00:28:15.851 "free_clusters": 476466, 00:28:15.851 "block_size": 512, 00:28:15.851 "cluster_size": 4194304 00:28:15.851 
} 00:28:15.851 ]' 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="222852f8-2367-459a-afa3-d89f6ab9c0e4") .free_clusters' 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=476466 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="222852f8-2367-459a-afa3-d89f6ab9c0e4") .cluster_size' 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1905864 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1905864 00:28:15.851 1905864 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:15.851 04:44:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 222852f8-2367-459a-afa3-d89f6ab9c0e4 lbd_0 20480 00:28:16.790 04:44:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=bf38b2a6-7785-4e60-9ec2-7249155609d3 00:28:16.790 04:44:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore bf38b2a6-7785-4e60-9ec2-7249155609d3 lvs_n_0 00:28:17.728 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=dc592af4-5ac8-41b8-a6fa-bae28e1d2a03 00:28:17.728 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb dc592af4-5ac8-41b8-a6fa-bae28e1d2a03 00:28:17.728 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=dc592af4-5ac8-41b8-a6fa-bae28e1d2a03 00:28:17.728 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:28:17.728 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:28:17.728 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:28:17.728 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:17.987 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:28:17.987 { 00:28:17.987 "uuid": "222852f8-2367-459a-afa3-d89f6ab9c0e4", 00:28:17.987 "name": "lvs_0", 00:28:17.987 "base_bdev": "Nvme0n1", 00:28:17.987 "total_data_clusters": 476466, 00:28:17.987 "free_clusters": 471346, 00:28:17.987 "block_size": 512, 00:28:17.987 "cluster_size": 4194304 00:28:17.987 }, 00:28:17.987 { 00:28:17.987 "uuid": "dc592af4-5ac8-41b8-a6fa-bae28e1d2a03", 00:28:17.987 "name": "lvs_n_0", 00:28:17.987 "base_bdev": "bf38b2a6-7785-4e60-9ec2-7249155609d3", 00:28:17.987 "total_data_clusters": 5114, 00:28:17.987 "free_clusters": 5114, 00:28:17.987 "block_size": 512, 00:28:17.987 "cluster_size": 4194304 00:28:17.987 } 00:28:17.987 ]' 00:28:17.987 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="dc592af4-5ac8-41b8-a6fa-bae28e1d2a03") .free_clusters' 00:28:17.987 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:28:17.988 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="dc592af4-5ac8-41b8-a6fa-bae28e1d2a03") .cluster_size' 00:28:17.988 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:28:17.988 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:28:17.988 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:28:17.988 20456 00:28:17.988 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:17.988 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dc592af4-5ac8-41b8-a6fa-bae28e1d2a03 lbd_nest_0 20456 00:28:18.247 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=9d6e2d4a-c128-463d-921b-997562576b17 00:28:18.247 04:44:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:18.507 04:44:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:18.507 04:44:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9d6e2d4a-c128-463d-921b-997562576b17 00:28:18.766 04:44:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:18.766 04:44:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:18.766 04:44:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:18.766 04:44:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:18.766 04:44:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:18.766 04:44:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:30.981 Initializing NVMe Controllers 00:28:30.981 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.981 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:30.981 Initialization complete. Launching workers. 
00:28:30.981 ======================================================== 00:28:30.981 Latency(us) 00:28:30.981 Device Information : IOPS MiB/s Average min max 00:28:30.981 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5868.90 2.87 170.09 68.88 7046.27 00:28:30.981 ======================================================== 00:28:30.981 Total : 5868.90 2.87 170.09 68.88 7046.27 00:28:30.981 00:28:30.981 04:45:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:30.981 04:45:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:43.195 Initializing NVMe Controllers 00:28:43.195 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:43.195 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:43.195 Initialization complete. Launching workers. 00:28:43.195 ======================================================== 00:28:43.195 Latency(us) 00:28:43.195 Device Information : IOPS MiB/s Average min max 00:28:43.195 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2634.38 329.30 379.04 156.60 8148.58 00:28:43.195 ======================================================== 00:28:43.195 Total : 2634.38 329.30 379.04 156.60 8148.58 00:28:43.195 00:28:43.195 04:45:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:43.195 04:45:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:43.195 04:45:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:53.187 Initializing NVMe Controllers 00:28:53.187 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.187 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.187 Initialization complete. Launching workers. 00:28:53.187 ======================================================== 00:28:53.187 Latency(us) 00:28:53.187 Device Information : IOPS MiB/s Average min max 00:28:53.187 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11224.11 5.48 2850.30 1021.42 10168.70 00:28:53.187 ======================================================== 00:28:53.187 Total : 11224.11 5.48 2850.30 1021.42 10168.70 00:28:53.187 00:28:53.187 04:45:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:53.187 04:45:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:05.395 Initializing NVMe Controllers 00:29:05.395 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.395 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:05.395 Initialization complete. Launching workers. 
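The IOPS and MiB/s columns in these tables are redundant by construction, which allows a quick consistency check: for 131072-byte IOs, MiB/s = IOPS * 131072 / 1048576 = IOPS / 8. Checking the qd=1, 128 KiB table above (an illustrative calculation, not harness output):

  echo "scale=2; 2634.38 / 8" | bc    # -> 329.29, matching the reported 329.30 MiB/s up to rounding

The same identity holds for the qd=32, 128 KiB run whose table follows (3995.00 / 8 = 499.38).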
00:29:05.395 ======================================================== 00:29:05.395 Latency(us) 00:29:05.395 Device Information : IOPS MiB/s Average min max 00:29:05.395 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3995.00 499.38 8015.24 4899.15 16002.89 00:29:05.395 ======================================================== 00:29:05.395 Total : 3995.00 499.38 8015.24 4899.15 16002.89 00:29:05.395 00:29:05.395 04:45:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:05.395 04:45:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:05.395 04:45:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:15.667 Initializing NVMe Controllers 00:29:15.667 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:15.667 Controller IO queue size 128, less than required. 00:29:15.667 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:15.667 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:15.667 Initialization complete. Launching workers. 00:29:15.667 ======================================================== 00:29:15.667 Latency(us) 00:29:15.667 Device Information : IOPS MiB/s Average min max 00:29:15.667 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18798.49 9.18 6811.38 2054.52 15835.38 00:29:15.667 ======================================================== 00:29:15.667 Total : 18798.49 9.18 6811.38 2054.52 15835.38 00:29:15.667 00:29:15.667 04:45:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:15.667 04:45:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:27.880 Initializing NVMe Controllers 00:29:27.880 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.880 Controller IO queue size 128, less than required. 00:29:27.881 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:27.881 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:27.881 Initialization complete. Launching workers. 
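At a fixed queue depth these runs also obey Little's law: sustained IOPS is approximately queue_depth / mean_latency. Applying it to the qd=32, 512-byte table above, which reports 11224.11 IOPS at 2850.30 us average (again an illustrative check, not harness output):

  echo "scale=2; 32 * 1000000 / 2850.30" | bc    # -> 11226.88 IOPS predicted vs 11224.11 measured

The small gap is per-IO overhead not captured by the averaged latency; the qd=128 tables that follow can be cross-checked the same way.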
00:29:27.881 ======================================================== 00:29:27.881 Latency(us) 00:29:27.881 Device Information : IOPS MiB/s Average min max 00:29:27.881 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11036.40 1379.55 11597.61 3375.85 23779.24 00:29:27.881 ======================================================== 00:29:27.881 Total : 11036.40 1379.55 11597.61 3375.85 23779.24 00:29:27.881 00:29:27.881 04:46:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.881 04:46:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9d6e2d4a-c128-463d-921b-997562576b17 00:29:27.881 04:46:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:28.140 04:46:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf38b2a6-7785-4e60-9ec2-7249155609d3 00:29:28.398 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:28.657 rmmod nvme_rdma 00:29:28.657 rmmod nvme_fabrics 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 426848 ']' 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 426848 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 426848 ']' 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 426848 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426848 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:28.657 04:46:07 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426848' 00:29:28.657 killing process with pid 426848 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 426848 00:29:28.657 04:46:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 426848 00:29:31.191 04:46:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:31.191 04:46:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:31.191 00:29:31.191 real 1m52.014s 00:29:31.191 user 7m1.389s 00:29:31.191 sys 0m7.607s 00:29:31.191 04:46:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.191 04:46:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:31.191 ************************************ 00:29:31.191 END TEST nvmf_perf 00:29:31.191 ************************************ 00:29:31.191 04:46:09 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:31.191 04:46:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:31.191 04:46:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.191 04:46:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.451 ************************************ 00:29:31.451 START TEST nvmf_fio_host 00:29:31.451 ************************************ 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:31.451 * Looking for test storage... 
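The fio_host suite begins by probing the installed lcov against version 2 (older lcov needs explicit branch/function coverage flags), and the xtrace of scripts/common.sh's cmp_versions helper fills the next stretch of the log. Condensed into a sketch of the traced logic (the real helper is more general; this is not it verbatim):

  # lt A B: succeed when version A sorts before version B.
  lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"    # split on . - or :, as in scripts/common.sh@336
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal is not less-than
  }
  lt 1.15 2 && echo "needs legacy lcov flags"    # 1 < 2 on the first field, so yes

Here lt 1.15 2 succeeds, so LCOV_OPTS picks up --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1, as the trace below shows.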
00:29:31.451 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:31.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.451 --rc genhtml_branch_coverage=1 00:29:31.451 --rc genhtml_function_coverage=1 00:29:31.451 --rc genhtml_legend=1 00:29:31.451 --rc geninfo_all_blocks=1 00:29:31.451 --rc geninfo_unexecuted_blocks=1 00:29:31.451 00:29:31.451 ' 00:29:31.451 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:31.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.451 --rc genhtml_branch_coverage=1 00:29:31.451 --rc genhtml_function_coverage=1 00:29:31.451 --rc genhtml_legend=1 00:29:31.451 --rc geninfo_all_blocks=1 00:29:31.451 --rc geninfo_unexecuted_blocks=1 00:29:31.451 00:29:31.451 ' 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:31.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.452 --rc genhtml_branch_coverage=1 00:29:31.452 --rc genhtml_function_coverage=1 00:29:31.452 --rc genhtml_legend=1 00:29:31.452 --rc geninfo_all_blocks=1 00:29:31.452 --rc geninfo_unexecuted_blocks=1 00:29:31.452 00:29:31.452 ' 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:31.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.452 --rc genhtml_branch_coverage=1 00:29:31.452 --rc genhtml_function_coverage=1 00:29:31.452 --rc genhtml_legend=1 00:29:31.452 --rc geninfo_all_blocks=1 00:29:31.452 --rc geninfo_unexecuted_blocks=1 00:29:31.452 00:29:31.452 ' 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.452 04:46:10 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:31.452 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:31.452 
04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:31.452 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:31.453 04:46:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.580 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:39.581 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:39.581 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:39.581 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:39.581 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:39.581 
04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:39.581 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:39.581 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:39.581 altname enp217s0f0np0 00:29:39.581 altname ens818f0np0 00:29:39.581 inet 192.168.100.8/24 scope global mlx_0_0 00:29:39.581 valid_lft forever preferred_lft forever 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:39.581 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:39.581 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:39.581 altname enp217s0f1np1 00:29:39.581 altname ens818f1np1 00:29:39.581 inet 192.168.100.9/24 scope global mlx_0_1 00:29:39.581 valid_lft forever preferred_lft forever 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:39.581 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:39.582 04:46:17 
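get_ip_address reduces the one-record-per-line output of "ip -o" to a bare IPv4 address; on this host the two ports resolve to 192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1). The same pipeline in isolation, as it appears at common.sh@117:

    get_ip_address() {
        local interface=$1
        # field 4 of "ip -o -4 addr show" is "address/prefix"; cut drops the prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
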
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:39.582 192.168.100.9' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:39.582 192.168.100.9' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:39.582 192.168.100.9' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=447879 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
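With both ports addressed, common.sh@485-486 derive the target endpoints from the newline-separated RDMA_IP_LIST: head takes the first address, tail -n +2 piped into head takes the second. Reproduced with this run's values:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
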
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 447879 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 447879 ']' 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.582 [2024-11-17 04:46:17.508787] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:39.582 [2024-11-17 04:46:17.508836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.582 [2024-11-17 04:46:17.600998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:39.582 [2024-11-17 04:46:17.623407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.582 [2024-11-17 04:46:17.623445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.582 [2024-11-17 04:46:17.623454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.582 [2024-11-17 04:46:17.623463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.582 [2024-11-17 04:46:17.623470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.582 [2024-11-17 04:46:17.625178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.582 [2024-11-17 04:46:17.625293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.582 [2024-11-17 04:46:17.625400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.582 [2024-11-17 04:46:17.625402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:29:39.582 04:46:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:39.582 [2024-11-17 04:46:17.910080] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aacc50/0x1ab1100) succeed. 00:29:39.582 [2024-11-17 04:46:17.919281] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aae290/0x1af27a0) succeed. 
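Once the reactors are up, the target is configured over the /var/tmp/spdk.sock RPC socket; the first call creates the RDMA transport with the options assembled above, and the two create_ib_device notices confirm that both mlx5 ports were claimed. The transport call on its own (rpc.py path shortened for readability; options exactly as in the trace, where -u sets the in-capsule data size):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
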
00:29:39.582 04:46:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:39.582 04:46:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:39.582 04:46:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.582 04:46:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:39.582 Malloc1 00:29:39.583 04:46:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:39.842 04:46:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:40.101 04:46:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:40.101 [2024-11-17 04:46:18.900401] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:40.101 04:46:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:40.361 04:46:19 
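Before launching fio, fio_plugin probes whether the spdk_nvme engine links a sanitizer runtime: it runs ldd on the plugin and greps for libasan and libclang_rt.asan, and any hit would be prepended to LD_PRELOAD so the sanitizer loads before fio itself. The probe step on its own (plugin path abbreviated; both greps come back empty in this run):

    plugin=build/fio/spdk_nvme
    # ldd prints "libasan.so.N => /path/libasan.so.N (0x...)"; field 3 is the path
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin"
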
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:40.361 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:40.362 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:40.362 04:46:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:40.942 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:40.942 fio-3.35 00:29:40.942 Starting 1 thread 00:29:43.474 00:29:43.474 test: (groupid=0, jobs=1): err= 0: pid=448450: Sun Nov 17 04:46:21 2024 00:29:43.474 read: IOPS=18.1k, BW=70.7MiB/s (74.1MB/s)(142MiB/2004msec) 00:29:43.474 slat (nsec): min=1344, max=42098, avg=1509.62, stdev=542.54 00:29:43.474 clat (usec): min=2107, max=6371, avg=3514.11, stdev=96.60 00:29:43.474 lat (usec): min=2129, max=6373, avg=3515.62, stdev=96.53 00:29:43.474 clat percentiles (usec): 00:29:43.474 | 1.00th=[ 3294], 5.00th=[ 3490], 10.00th=[ 3490], 20.00th=[ 3490], 00:29:43.474 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523], 00:29:43.474 | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3523], 95.00th=[ 3556], 00:29:43.474 | 99.00th=[ 3621], 99.50th=[ 3851], 99.90th=[ 5014], 99.95th=[ 5866], 00:29:43.474 | 99.99th=[ 6325] 00:29:43.474 bw ( KiB/s): min=70968, max=73232, per=99.99%, avg=72344.00, stdev=979.43, samples=4 00:29:43.474 iops : min=17742, max=18308, avg=18086.00, stdev=244.86, samples=4 00:29:43.474 write: IOPS=18.1k, BW=70.7MiB/s (74.2MB/s)(142MiB/2004msec); 0 zone resets 00:29:43.474 slat (nsec): min=1383, max=18478, avg=1586.20, stdev=583.96 00:29:43.474 clat (usec): min=2141, max=6360, avg=3511.78, stdev=77.94 00:29:43.474 lat (usec): min=2152, max=6361, avg=3513.37, stdev=77.87 00:29:43.474 clat percentiles (usec): 00:29:43.474 | 1.00th=[ 3392], 5.00th=[ 3490], 10.00th=[ 3490], 20.00th=[ 3490], 00:29:43.474 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523], 00:29:43.474 | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3523], 95.00th=[ 3523], 00:29:43.474 | 99.00th=[ 3589], 99.50th=[ 3851], 99.90th=[ 4359], 99.95th=[ 5014], 00:29:43.474 | 99.99th=[ 6259] 00:29:43.474 bw ( KiB/s): min=71096, max=73152, per=100.00%, avg=72484.00, stdev=945.28, samples=4 00:29:43.474 iops : min=17774, max=18288, avg=18121.00, stdev=236.32, samples=4 00:29:43.474 lat (msec) : 4=99.82%, 10=0.18% 00:29:43.474 cpu : usr=99.55%, sys=0.05%, ctx=14, majf=0, minf=3 00:29:43.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:43.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:43.474 issued rwts: total=36247,36294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:43.474 00:29:43.474 Run status group 0 (all jobs): 00:29:43.474 READ: bw=70.7MiB/s (74.1MB/s), 70.7MiB/s-70.7MiB/s (74.1MB/s-74.1MB/s), io=142MiB (148MB), run=2004-2004msec 00:29:43.474 WRITE: bw=70.7MiB/s (74.2MB/s), 70.7MiB/s-70.7MiB/s (74.2MB/s-74.2MB/s), io=142MiB (149MB), run=2004-2004msec 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:43.474 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:43.475 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:43.475 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:43.475 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:43.475 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:43.475 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:43.475 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:43.475 04:46:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:43.475 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:43.475 fio-3.35 00:29:43.475 Starting 1 thread 00:29:46.006 00:29:46.006 test: (groupid=0, jobs=1): err= 0: pid=448965: Sun Nov 17 04:46:24 2024 00:29:46.006 read: IOPS=14.6k, BW=229MiB/s (240MB/s)(452MiB/1974msec) 00:29:46.006 slat (nsec): min=2224, max=56101, avg=2577.05, stdev=1029.11 00:29:46.006 clat (usec): min=494, max=8704, avg=1553.88, stdev=1206.02 00:29:46.006 lat (usec): min=497, max=8724, avg=1556.46, stdev=1206.41 00:29:46.006 clat percentiles (usec): 00:29:46.006 | 1.00th=[ 685], 5.00th=[ 775], 10.00th=[ 832], 20.00th=[ 914], 00:29:46.006 | 30.00th=[ 979], 40.00th=[ 1074], 50.00th=[ 1172], 60.00th=[ 1287], 00:29:46.006 | 70.00th=[ 1401], 80.00th=[ 1565], 90.00th=[ 3195], 95.00th=[ 4817], 00:29:46.006 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 7308], 99.95th=[ 7439], 00:29:46.006 | 99.99th=[ 8717] 00:29:46.006 bw ( KiB/s): min=107168, max=116320, per=48.48%, avg=113558.25, stdev=4286.02, samples=4 00:29:46.006 iops : min= 6698, max= 7270, avg=7097.25, stdev=267.81, samples=4 00:29:46.006 write: IOPS=8137, BW=127MiB/s (133MB/s)(230MiB/1806msec); 0 zone resets 00:29:46.007 slat (usec): min=26, max=130, avg=28.69, stdev= 5.77 00:29:46.007 clat (usec): min=4130, max=20763, avg=12598.88, stdev=1886.67 00:29:46.007 lat (usec): min=4158, max=20790, avg=12627.57, stdev=1886.29 00:29:46.007 clat percentiles (usec): 00:29:46.007 | 1.00th=[ 7242], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11207], 00:29:46.007 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:29:46.007 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14746], 95.00th=[15795], 00:29:46.007 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19792], 99.95th=[20055], 00:29:46.007 | 99.99th=[20841] 00:29:46.007 bw ( KiB/s): min=112672, max=120960, per=90.25%, avg=117517.75, stdev=3914.27, samples=4 00:29:46.007 iops : min= 7042, max= 7560, avg=7344.75, stdev=244.70, samples=4 00:29:46.007 lat (usec) : 500=0.01%, 750=2.44%, 1000=19.00% 00:29:46.007 lat (msec) : 2=37.03%, 4=2.46%, 10=7.69%, 20=31.36%, 50=0.02% 00:29:46.007 cpu : usr=96.36%, sys=1.69%, ctx=213, majf=0, minf=3 00:29:46.007 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:46.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.007 issued rwts: total=28899,14697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.007 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.007 00:29:46.007 Run status group 0 (all jobs): 00:29:46.007 READ: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=452MiB (473MB), run=1974-1974msec 00:29:46.007 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=230MiB (241MB), run=1806-1806msec 00:29:46.007 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.007 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 
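Both fio jobs so far ran through the same wrapper: a stock /usr/src/fio binary with the SPDK NVMe engine LD_PRELOADed, and the NVMe-oF target addressed through fio's filename string rather than a block device. The shape of the invocation, with the jenkins workspace prefix dropped for readability:

    # ioengine=spdk is set in the job file; the filename encodes transport/address/port
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio \
        app/fio/nvme/example_config.fio \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' \
        --bs=4096
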
00:29:46.007 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:46.007 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:46.007 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:46.007 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:29:46.007 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:46.007 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:46.007 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:46.266 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:46.266 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:29:46.266 04:46:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:29:49.555 Nvme0n1 00:29:49.555 04:46:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=adf09024-cf4d-4a7d-8000-efae3a2b285d 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb adf09024-cf4d-4a7d-8000-efae3a2b285d 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=adf09024-cf4d-4a7d-8000-efae3a2b285d 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:54.832 { 00:29:54.832 "uuid": "adf09024-cf4d-4a7d-8000-efae3a2b285d", 00:29:54.832 "name": "lvs_0", 00:29:54.832 "base_bdev": "Nvme0n1", 00:29:54.832 "total_data_clusters": 1862, 00:29:54.832 "free_clusters": 1862, 00:29:54.832 "block_size": 512, 00:29:54.832 "cluster_size": 1073741824 00:29:54.832 } 00:29:54.832 ]' 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="adf09024-cf4d-4a7d-8000-efae3a2b285d") .free_clusters' 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1862 00:29:54.832 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="adf09024-cf4d-4a7d-8000-efae3a2b285d") .cluster_size' 00:29:55.091 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:29:55.092 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1906688 00:29:55.092 04:46:33 
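get_lvs_free_mb is pure arithmetic on the JSON above: 1862 free clusters of 1073741824 bytes (1 GiB) each is 1862 GiB, i.e. 1862 x 1024 = 1906688 MiB, exactly the size handed to bdev_lvol_create next. In shell:

    fc=1862 cs=1073741824
    echo $(( fc * cs / 1048576 ))   # 1906688 (MiB; 1048576 bytes per MiB)
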
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1906688 00:29:55.092 1906688 00:29:55.092 04:46:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:29:55.660 f97e8a51-7fb2-48b3-ae28-a655aeb17b46 00:29:55.660 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:55.660 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:55.919 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:56.178 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:56.179 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:56.179 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:56.179 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:56.179 04:46:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:56.437 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:56.437 fio-3.35 00:29:56.437 Starting 1 thread 00:29:58.972 00:29:58.972 test: (groupid=0, jobs=1): err= 0: pid=451309: Sun Nov 17 04:46:37 2024 00:29:58.972 read: IOPS=9767, BW=38.2MiB/s (40.0MB/s)(76.5MiB/2005msec) 00:29:58.972 slat (nsec): min=1348, max=20494, avg=1477.33, stdev=238.70 00:29:58.972 clat (usec): min=163, max=332549, avg=6505.22, stdev=18769.24 00:29:58.972 lat (usec): min=164, max=332552, avg=6506.70, stdev=18769.28 00:29:58.972 clat percentiles (msec): 00:29:58.972 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:58.972 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:29:58.972 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:29:58.972 | 99.00th=[ 6], 99.50th=[ 8], 99.90th=[ 334], 99.95th=[ 334], 00:29:58.972 | 99.99th=[ 334] 00:29:58.972 bw ( KiB/s): min=14888, max=47288, per=99.95%, avg=39050.00, stdev=16108.67, samples=4 00:29:58.972 iops : min= 3722, max=11822, avg=9762.50, stdev=4027.17, samples=4 00:29:58.972 write: IOPS=9780, BW=38.2MiB/s (40.1MB/s)(76.6MiB/2005msec); 0 zone resets 00:29:58.972 slat (nsec): min=1381, max=6171, avg=1570.63, stdev=178.10 00:29:58.972 clat (usec): min=180, max=332858, avg=6476.68, stdev=18243.42 00:29:58.972 lat (usec): min=181, max=332862, avg=6478.25, stdev=18243.48 00:29:58.972 clat percentiles (msec): 00:29:58.972 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:58.972 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:29:58.972 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:29:58.972 | 99.00th=[ 6], 99.50th=[ 8], 99.90th=[ 334], 99.95th=[ 334], 00:29:58.972 | 99.99th=[ 334] 00:29:58.972 bw ( KiB/s): min=15528, max=46968, per=99.90%, avg=39082.00, stdev=15702.69, samples=4 00:29:58.972 iops : min= 3882, max=11742, avg=9770.50, stdev=3925.67, samples=4 00:29:58.972 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:29:58.972 lat (msec) : 2=0.05%, 4=0.21%, 10=99.38%, 500=0.33% 00:29:58.972 cpu : usr=99.60%, sys=0.05%, ctx=15, majf=0, minf=3 00:29:58.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:58.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:58.972 issued rwts: total=19584,19609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:58.972 00:29:58.972 Run status group 0 (all jobs): 00:29:58.972 READ: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.2MB), run=2005-2005msec 00:29:58.972 WRITE: bw=38.2MiB/s (40.1MB/s), 38.2MiB/s-38.2MiB/s (40.1MB/s-40.1MB/s), io=76.6MiB (80.3MB), 
run=2005-2005msec 00:29:58.972 04:46:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:59.231 04:46:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:00.169 04:46:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=3b641b3d-a6b5-43b5-ae76-cb32373fa20c 00:30:00.169 04:46:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 3b641b3d-a6b5-43b5-ae76-cb32373fa20c 00:30:00.169 04:46:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=3b641b3d-a6b5-43b5-ae76-cb32373fa20c 00:30:00.169 04:46:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:00.169 04:46:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:30:00.441 04:46:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:30:00.442 04:46:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:00.442 04:46:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:00.442 { 00:30:00.442 "uuid": "adf09024-cf4d-4a7d-8000-efae3a2b285d", 00:30:00.442 "name": "lvs_0", 00:30:00.442 "base_bdev": "Nvme0n1", 00:30:00.442 "total_data_clusters": 1862, 00:30:00.442 "free_clusters": 0, 00:30:00.442 "block_size": 512, 00:30:00.442 "cluster_size": 1073741824 00:30:00.442 }, 00:30:00.442 { 00:30:00.442 "uuid": "3b641b3d-a6b5-43b5-ae76-cb32373fa20c", 00:30:00.442 "name": "lvs_n_0", 00:30:00.442 "base_bdev": "f97e8a51-7fb2-48b3-ae28-a655aeb17b46", 00:30:00.442 "total_data_clusters": 476206, 00:30:00.442 "free_clusters": 476206, 00:30:00.442 "block_size": 512, 00:30:00.442 "cluster_size": 4194304 00:30:00.442 } 00:30:00.442 ]' 00:30:00.442 04:46:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3b641b3d-a6b5-43b5-ae76-cb32373fa20c") .free_clusters' 00:30:00.442 04:46:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=476206 00:30:00.442 04:46:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3b641b3d-a6b5-43b5-ae76-cb32373fa20c") .cluster_size' 00:30:00.704 04:46:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:00.704 04:46:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1904824 00:30:00.704 04:46:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1904824 00:30:00.704 1904824 00:30:00.704 04:46:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:30:01.642 5d2f522b-91d5-41c0-83d7-dc1344ad1584 00:30:01.642 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:01.642 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 
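The nested store carved out of lvs_0/lbd_0 uses 4 MiB clusters, so the same helper now yields 476206 x 4 MiB = 1904824 MiB for lbd_nest_0. The jq selection plus the conversion, condensed (UUID from this run, rpc.py path shortened):

    uuid=3b641b3d-a6b5-43b5-ae76-cb32373fa20c
    lvs=$(scripts/rpc.py bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs")   # 476206
    cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<< "$lvs")    # 4194304
    echo $(( fc * cs / 1048576 ))                                        # 1904824
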
00:30:01.901 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:02.160 04:46:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:02.418 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:02.418 fio-3.35 00:30:02.418 Starting 1 thread 00:30:04.953 00:30:04.953 test: (groupid=0, jobs=1): err= 0: pid=452442: Sun Nov 17 04:46:43 2024 00:30:04.953 read: IOPS=10.3k, BW=40.1MiB/s (42.0MB/s)(80.4MiB/2006msec) 00:30:04.953 slat (nsec): min=1349, max=24676, avg=1474.76, stdev=223.09 00:30:04.953 clat (usec): min=2674, max=11012, avg=6167.42, stdev=165.04 00:30:04.953 lat (usec): min=2676, max=11013, avg=6168.89, stdev=165.00 00:30:04.953 clat percentiles (usec): 00:30:04.953 | 1.00th=[ 6063], 5.00th=[ 6128], 10.00th=[ 6128], 20.00th=[ 6128], 00:30:04.953 | 30.00th=[ 6128], 40.00th=[ 6194], 50.00th=[ 6194], 60.00th=[ 6194], 00:30:04.953 | 70.00th=[ 6194], 80.00th=[ 6194], 90.00th=[ 6194], 95.00th=[ 6194], 00:30:04.953 | 99.00th=[ 6259], 99.50th=[ 6259], 99.90th=[ 8455], 99.95th=[ 9896], 00:30:04.954 | 99.99th=[10028] 00:30:04.954 bw ( KiB/s): min=39712, max=41832, per=100.00%, avg=41056.00, stdev=938.17, samples=4 00:30:04.954 iops : min= 9928, max=10458, avg=10264.00, stdev=234.54, samples=4 00:30:04.954 write: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(80.5MiB/2006msec); 0 zone resets 00:30:04.954 slat (nsec): min=1397, max=17298, avg=1567.68, stdev=246.20 00:30:04.954 clat (usec): min=2668, max=11070, avg=6188.42, stdev=188.49 00:30:04.954 lat (usec): min=2672, max=11072, avg=6189.99, stdev=188.47 00:30:04.954 clat percentiles (usec): 00:30:04.954 | 1.00th=[ 6128], 5.00th=[ 6128], 10.00th=[ 6128], 20.00th=[ 6194], 00:30:04.954 | 30.00th=[ 6194], 40.00th=[ 6194], 50.00th=[ 6194], 60.00th=[ 6194], 00:30:04.954 | 70.00th=[ 6194], 80.00th=[ 6194], 90.00th=[ 6194], 95.00th=[ 6259], 00:30:04.954 | 99.00th=[ 6259], 99.50th=[ 6259], 99.90th=[ 9503], 99.95th=[10028], 00:30:04.954 | 99.99th=[11076] 00:30:04.954 bw ( KiB/s): min=40224, max=41464, per=99.96%, avg=41056.00, stdev=575.15, samples=4 00:30:04.954 iops : min=10056, max=10366, avg=10264.00, stdev=143.79, samples=4 00:30:04.954 lat (msec) : 4=0.02%, 10=99.94%, 20=0.04% 00:30:04.954 cpu : usr=99.60%, sys=0.05%, ctx=16, majf=0, minf=3 00:30:04.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:04.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:04.954 issued rwts: total=20586,20597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:04.954 00:30:04.954 Run status group 0 (all jobs): 00:30:04.954 READ: bw=40.1MiB/s (42.0MB/s), 40.1MiB/s-40.1MiB/s (42.0MB/s-42.0MB/s), io=80.4MiB (84.3MB), run=2006-2006msec 00:30:04.954 WRITE: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=80.5MiB (84.4MB), run=2006-2006msec 00:30:04.954 04:46:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:05.213 04:46:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:05.213 04:46:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:13.335 04:46:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:13.335 04:46:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:18.613 04:46:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:18.613 04:46:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:21.901 rmmod nvme_rdma 00:30:21.901 rmmod nvme_fabrics 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 447879 ']' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 447879 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 447879 ']' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 447879 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 447879 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 447879' 00:30:21.901 killing process with pid 447879 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 447879 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 447879 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:21.901 00:30:21.901 real 0m50.439s 00:30:21.901 user 3m37.426s 00:30:21.901 sys 0m8.155s 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
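nvmftestfini unwinds the setup in reverse: the nvme-rdma and nvme-fabrics modules are removed with modprobe -r, and killprocess takes down the nvmf_tgt by pid only after checking what the pid currently names. The real helper also special-cases processes launched via sudo; this sketch keeps just the core guard, with the pid from this run:

    pid=447879
    # Make sure the pid was not recycled before signalling it
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"   # nvmf_tgt is a child of this shell, so wait works
    fi
    modprobe -v -r nvme-rdma nvme-fabrics
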
common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.901 ************************************ 00:30:21.901 END TEST nvmf_fio_host 00:30:21.901 ************************************ 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.901 ************************************ 00:30:21.901 START TEST nvmf_failover 00:30:21.901 ************************************ 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:30:21.901 * Looking for test storage... 00:30:21.901 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.901 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:22.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.161 --rc genhtml_branch_coverage=1 00:30:22.161 --rc genhtml_function_coverage=1 00:30:22.161 --rc genhtml_legend=1 00:30:22.161 --rc geninfo_all_blocks=1 00:30:22.161 --rc geninfo_unexecuted_blocks=1 00:30:22.161 00:30:22.161 ' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:22.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.161 --rc genhtml_branch_coverage=1 00:30:22.161 --rc genhtml_function_coverage=1 00:30:22.161 --rc genhtml_legend=1 00:30:22.161 --rc geninfo_all_blocks=1 00:30:22.161 --rc geninfo_unexecuted_blocks=1 00:30:22.161 00:30:22.161 ' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:22.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.161 --rc genhtml_branch_coverage=1 00:30:22.161 --rc genhtml_function_coverage=1 00:30:22.161 --rc genhtml_legend=1 00:30:22.161 --rc geninfo_all_blocks=1 00:30:22.161 --rc geninfo_unexecuted_blocks=1 00:30:22.161 00:30:22.161 ' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:22.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.161 --rc genhtml_branch_coverage=1 00:30:22.161 --rc genhtml_function_coverage=1 00:30:22.161 --rc genhtml_legend=1 00:30:22.161 --rc geninfo_all_blocks=1 00:30:22.161 --rc geninfo_unexecuted_blocks=1 00:30:22.161 00:30:22.161 ' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.161 04:47:00 
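The lcov version check above goes through the harness's generic comparator: both version strings are split on dots and compared numeric field by numeric field, which is how lcov 1.15 is recognised as older than 2. The core of that comparison, trimmed to the less-than case and assuming purely numeric fields:

    lt() {   # lt A B: succeed when version A sorts before version B
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo older    # prints "older"
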
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:22.161 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:22.161 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:22.162 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.162 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.162 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.162 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:22.162 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:22.162 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:30:22.162 04:47:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:30.287 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:30.288 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:30.288 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:30.288 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:30.288 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:30.288 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:30.288 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:30.288 altname enp217s0f0np0 00:30:30.288 altname ens818f0np0 00:30:30.288 inet 192.168.100.8/24 scope global mlx_0_0 00:30:30.288 
valid_lft forever preferred_lft forever 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:30.288 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:30.288 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:30.288 altname enp217s0f1np1 00:30:30.288 altname ens818f1np1 00:30:30.288 inet 192.168.100.9/24 scope global mlx_0_1 00:30:30.288 valid_lft forever preferred_lft forever 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:30.288 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.289 04:47:07 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:30.289 192.168.100.9' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:30.289 192.168.100.9' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:30.289 192.168.100.9' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=458914 00:30:30.289 
04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 458914 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 458914 ']' 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.289 04:47:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.289 [2024-11-17 04:47:08.022897] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:30:30.289 [2024-11-17 04:47:08.022953] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.289 [2024-11-17 04:47:08.113102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:30.289 [2024-11-17 04:47:08.134839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.289 [2024-11-17 04:47:08.134876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.289 [2024-11-17 04:47:08.134885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.289 [2024-11-17 04:47:08.134894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.289 [2024-11-17 04:47:08.134901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
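The waitforlisten 458914 call traced above blocks until the freshly launched nvmf_tgt answers on its RPC socket at /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming the helper's usual shape (the real implementation in common/autotest_common.sh also toggles xtrace and handles more edge cases):

# Sketch only: poll the app's RPC socket until it responds or the process dies.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for (( i = 0; i < max_retries; i++ )); do
        # Bail out early if the target exited during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods is a cheap request every live SPDK app can answer.
        "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}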
00:30:30.289 [2024-11-17 04:47:08.136485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.289 [2024-11-17 04:47:08.136597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.289 [2024-11-17 04:47:08.136598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.289 04:47:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.289 04:47:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:30.289 04:47:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:30.289 04:47:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:30.289 04:47:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.289 04:47:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.289 04:47:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:30.289 [2024-11-17 04:47:08.468188] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x86e3d0/0x872880) succeed. 00:30:30.289 [2024-11-17 04:47:08.477293] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x86f970/0x8b3f20) succeed. 00:30:30.289 04:47:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:30.289 Malloc0 00:30:30.289 04:47:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:30.289 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:30.553 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:30.554 [2024-11-17 04:47:09.366978] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:30.812 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:30.812 [2024-11-17 04:47:09.571443] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:30.812 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:31.071 [2024-11-17 04:47:09.764110] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:30:31.071 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=459292 00:30:31.071 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify 
-t 15 -f
00:30:31.071 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:31.071 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 459292 /var/tmp/bdevperf.sock
00:30:31.072 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 459292 ']'
00:30:31.072 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:31.072 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:31.072 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:31.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:31.072 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:31.072 04:47:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:31.330 04:47:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:31.331 04:47:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:30:31.331 04:47:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:30:31.589 NVMe0n1
00:30:31.589 04:47:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:30:31.849
00:30:31.849 04:47:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:31.849 04:47:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=459343
00:30:31.849 04:47:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:30:32.786 04:47:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:30:33.046 04:47:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:30:36.336 04:47:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:30:36.336
00:30:36.336 04:47:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:30:36.595 04:47:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:30:39.884 04:47:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:30:39.884 [2024-11-17 04:47:18.438700] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:30:39.884 04:47:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:30:40.839 04:47:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:30:41.098 04:47:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 459343
00:30:47.679 {
00:30:47.679   "results": [
00:30:47.679     {
00:30:47.679       "job": "NVMe0n1",
00:30:47.679       "core_mask": "0x1",
00:30:47.679       "workload": "verify",
00:30:47.679       "status": "finished",
00:30:47.679       "verify_range": {
00:30:47.679         "start": 0,
00:30:47.679         "length": 16384
00:30:47.679       },
00:30:47.679       "queue_depth": 128,
00:30:47.679       "io_size": 4096,
00:30:47.679       "runtime": 15.004612,
00:30:47.679       "iops": 14510.338554572421,
00:30:47.679       "mibps": 56.68100997879852,
00:30:47.679       "io_failed": 4579,
00:30:47.679       "io_timeout": 0,
00:30:47.679       "avg_latency_us": 8616.822580094557,
00:30:47.679       "min_latency_us": 342.4256,
00:30:47.679       "max_latency_us": 1033476.5056
00:30:47.679     }
00:30:47.679   ],
00:30:47.679   "core_count": 1
00:30:47.679 }
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 459292
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 459292 ']'
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 459292
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 459292
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 459292'
00:30:47.679 killing process with pid 459292
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 459292
00:30:47.679 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 459292
00:30:47.680 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:47.680 [2024-11-17 04:47:09.840566] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
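A quick consistency check on the bdevperf summary above, before the try.txt dump continues: the reported throughput is just iops times the 4096-byte io_size.

# mibps should equal iops * io_size / 2^20; plugging in the values printed above:
awk 'BEGIN { printf "%.8f MiB/s\n", 14510.338554572421 * 4096 / (1024 * 1024) }'
# prints 56.68100998 MiB/s, matching the reported "mibps": 56.68100997879852.

The 4579 entries under io_failed are presumably the I/Os caught in flight while the active listener was being torn down during the forced path switches; the run still finishes with verify enabled.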
00:30:47.680 [2024-11-17 04:47:09.840626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459292 ] 00:30:47.680 [2024-11-17 04:47:09.931138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.680 [2024-11-17 04:47:09.953363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.680 Running I/O for 15 seconds... 00:30:47.680 18304.00 IOPS, 71.50 MiB/s [2024-11-17T03:47:26.510Z] 10048.00 IOPS, 39.25 MiB/s [2024-11-17T03:47:26.510Z] [2024-11-17 04:47:12.777161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.680 [2024-11-17 04:47:12.777197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10577 cdw0:0 sqhd:644a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.777209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.680 [2024-11-17 04:47:12.777219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10577 cdw0:0 sqhd:644a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.777230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.680 [2024-11-17 04:47:12.777239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10577 cdw0:0 sqhd:644a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.777250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.680 [2024-11-17 04:47:12.777259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:10577 cdw0:0 sqhd:644a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:47.680 [2024-11-17 04:47:12.779223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
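The "CQ transport error -6" and failed-state notices just above are the host-side qpair seeing its RDMA connection die once the 4420 listener was removed; the notice that follows below is bdev_nvme starting the switch to the alternate path. The wiring behind this is visible earlier in the trace and condenses to the following (same addresses, ports and NQN this run used):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# Attach the same subsystem through two portals; with -x failover the second
# trid is stored as an alternate path for NVMe0, not a second controller.
$rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# Dropping the active listener on the target side then forces the path switch:
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420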
00:30:47.680 [2024-11-17 04:47:12.779242] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:30:47.680 [2024-11-17 04:47:12.779252] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:30:47.680 [2024-11-17 04:47:12.779272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 
cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.779960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.779994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780045] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 
04:47:12.780444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.680 [2024-11-17 04:47:12.780475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.680 [2024-11-17 04:47:12.780485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.681 [2024-11-17 04:47:12.780515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.681 [2024-11-17 04:47:12.780524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.681 [2024-11-17 04:47:12.780555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.681 [2024-11-17 04:47:12.780565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.681 [2024-11-17 04:47:12.780595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.681 [2024-11-17 04:47:12.780605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.681 [2024-11-17 04:47:12.780635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.681 [2024-11-17 04:47:12.780646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.681 [2024-11-17 04:47:12.780677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.681 [2024-11-17 04:47:12.780687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.681 [2024-11-17 04:47:12.780717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.681 [2024-11-17 04:47:12.780727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.681 [2024-11-17 04:47:12.780757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.681 [2024-11-17 04:47:12.780767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.681 [2024-11-17 04:47:12.780797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.681 [2024-11-17 04:47:12.780808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0 00:30:47.681 [2024-11-17 04:47:12.780838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30504 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.681 [2024-11-17 04:47:12.780848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10577 cdw0:87649000 sqhd:c46a p:0 m:0 dnr:0
00:30:47.681 [2024-11-17 04:47:12.780878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... dozens of identical *NOTICE* pairs elided: WRITE commands lba 30520-30712 (len:8, SGL DATA BLOCK) and READ commands lba 29696-30184 (len:8, SGL KEYED DATA BLOCK, key:0x182200), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:10577 sqhd:c46a ...]
00:30:47.683 [2024-11-17 04:47:12.798890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:47.683 [2024-11-17 04:47:12.798908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:47.683 [2024-11-17 04:47:12.798917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30192 len:8 PRP1 0x0 PRP2 0x0
00:30:47.683 [2024-11-17 04:47:12.798928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.683 [2024-11-17 04:47:12.799000] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:30:47.683 [2024-11-17 04:47:12.799038] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:30:47.683 [2024-11-17 04:47:12.801736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:47.683 [2024-11-17 04:47:12.838754] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
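[Editor's note: the ABORTED - SQ DELETION (00/08) status above decodes, per the NVMe base specification, as Status Code Type 0x0 (Generic Command Status) / Status Code 0x08 (Command Aborted due to SQ Deletion): the expected completion for I/O still queued on qid:1 when the qpair is torn down during this failover/reset cycle. Each command is len:8 logical blocks; the throughput samples below (e.g. 11757.33 IOPS x 4096 B ≈ 45.93 MiB/s) are consistent with 512-byte blocks, i.e. 4 KiB per I/O. Below is a minimal, hypothetical post-processing sketch, assuming a saved copy of this console log; it is not part of the SPDK test suite, and the regexes simply mirror the record formats shown above.]

#!/usr/bin/env python3
# tally_aborts.py (hypothetical helper, not part of the SPDK repo): count the
# commands printed by nvme_io_qpair_print_command and the completions that
# carry the SQ-deletion status in a saved copy of this log.
import re
import sys
from collections import Counter

CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                    r"sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")
CPL_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")

def tally(path):
    ops = Counter()   # READ/WRITE counts
    lbas = []         # LBAs of every printed command
    aborts = 0        # completions carrying the SQ-deletion status
    with open(path, errors="replace") as log:
        for line in log:                      # a line may hold fused records
            for m in CMD_RE.finditer(line):
                ops[m.group(1)] += 1
                lbas.append(int(m.group(2)))
            aborts += len(CPL_RE.findall(line))
    print("commands printed:", dict(ops))
    print("SQ-deletion aborts:", aborts)
    if lbas:
        print("lba range: %d-%d" % (min(lbas), max(lbas)))

if __name__ == "__main__":
    tally(sys.argv[1])   # usage: python3 tally_aborts.py build.log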
00:30:47.683 11757.33 IOPS, 45.93 MiB/s [2024-11-17T03:47:26.513Z]
13391.50 IOPS, 52.31 MiB/s [2024-11-17T03:47:26.513Z]
12743.00 IOPS, 49.78 MiB/s [2024-11-17T03:47:26.513Z]
[2024-11-17 04:47:16.246487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x180e00
00:30:47.683 [2024-11-17 04:47:16.246524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0
[... dozens of identical *NOTICE* pairs elided: a second teardown cycle at 04:47:16 aborts interleaved READ commands (lba 129104-129448, SGL KEYED DATA BLOCK, key:0x180e00) and WRITE commands (lba 129648-130024, SGL DATA BLOCK), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:10579 sqhd:600e ...]
00:30:47.686 [2024-11-17 04:47:16.248420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x180e00
00:30:47.686 [2024-11-17 04:47:16.248429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x180e00 00:30:47.686 [2024-11-17 04:47:16.248732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.686 [2024-11-17 04:47:16.248752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.686 [2024-11-17 04:47:16.248771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.686 [2024-11-17 04:47:16.248793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248803] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.686 [2024-11-17 04:47:16.248812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.686 [2024-11-17 04:47:16.248832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.686 [2024-11-17 04:47:16.248842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.687 [2024-11-17 04:47:16.248851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.687 [2024-11-17 04:47:16.248861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.687 [2024-11-17 04:47:16.248870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.687 [2024-11-17 04:47:16.248881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.687 [2024-11-17 04:47:16.248890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.687 [2024-11-17 04:47:16.248900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x180e00 00:30:47.687 [2024-11-17 04:47:16.248909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.687 [2024-11-17 04:47:16.248920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x180e00 00:30:47.687 [2024-11-17 04:47:16.248930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.687 [2024-11-17 04:47:16.248946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x180e00 00:30:47.687 [2024-11-17 04:47:16.248955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.687 [2024-11-17 04:47:16.248966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x180e00 00:30:47.687 [2024-11-17 04:47:16.248975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10579 cdw0:87649000 sqhd:600e p:0 m:0 dnr:0 00:30:47.687 [2024-11-17 04:47:16.248985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x180e00 
00:30:47.687 [2024-11-17 04:47:16.250901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:47.687 [2024-11-17 04:47:16.250915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:47.687 [2024-11-17 04:47:16.250923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130112 len:8 PRP1 0x0 PRP2 0x0
00:30:47.687 [2024-11-17 04:47:16.250937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.687 [2024-11-17 04:47:16.250980] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:30:47.687 [2024-11-17 04:47:16.250992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:30:47.687 [2024-11-17 04:47:16.253717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:47.687 [2024-11-17 04:47:16.268490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:30:47.687 [2024-11-17 04:47:16.310124] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:30:47.687 11747.17 IOPS, 45.89 MiB/s [2024-11-17T03:47:26.517Z] 12713.86 IOPS, 49.66 MiB/s [2024-11-17T03:47:26.517Z] 13444.25 IOPS, 52.52 MiB/s [2024-11-17T03:47:26.517Z] 13923.78 IOPS, 54.39 MiB/s [2024-11-17T03:47:26.517Z]
[... repeated NOTICE pairs omitted: a second abort burst starting at [2024-11-17 04:47:20.653999], in which nvme_qpair.c: 243:nvme_io_qpair_print_command lists each queued READ/WRITE on sqid:1 (READs as SGL KEYED DATA BLOCK with key 0x182200, WRITEs as SGL DATA BLOCK OFFSET 0x0) and nvme_qpair.c: 474:spdk_nvme_print_completion completes each as ABORTED - SQ DELETION (00/08) qid:1 cid:10582 sqhd:3cda ...]
00:30:47.690 [2024-11-17 04:47:20.656518]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.690 [2024-11-17 04:47:20.656527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10582 cdw0:87649000 sqhd:3cda p:0 m:0 dnr:0 00:30:47.690 [2024-11-17 04:47:20.656537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.690 [2024-11-17 04:47:20.656547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10582 cdw0:87649000 sqhd:3cda p:0 m:0 dnr:0 00:30:47.690 [2024-11-17 04:47:20.656558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.690 [2024-11-17 04:47:20.656567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10582 cdw0:87649000 sqhd:3cda p:0 m:0 dnr:0 00:30:47.690 [2024-11-17 04:47:20.656578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.690 [2024-11-17 04:47:20.656587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10582 cdw0:87649000 sqhd:3cda p:0 m:0 dnr:0 00:30:47.690 [2024-11-17 04:47:20.656597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.690 [2024-11-17 04:47:20.656606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:10582 cdw0:87649000 sqhd:3cda p:0 m:0 dnr:0 00:30:47.690 [2024-11-17 04:47:20.658653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:47.691 [2024-11-17 04:47:20.658667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:47.691 [2024-11-17 04:47:20.658676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111664 len:8 PRP1 0x0 PRP2 0x0 00:30:47.691 [2024-11-17 04:47:20.658685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.691 [2024-11-17 04:47:20.658731] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:30:47.691 [2024-11-17 04:47:20.658743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:47.691 [2024-11-17 04:47:20.661519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:47.691 [2024-11-17 04:47:20.675617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:30:47.691 12531.40 IOPS, 48.95 MiB/s [2024-11-17T03:47:26.521Z] [2024-11-17 04:47:20.712549] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
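The reset walk just logged is bdev_nvme moving I/O between the portals registered for nqn.2016-06.io.spdk:cnode1. As a minimal sketch of where those alternate paths come from (mirroring the rpc.py calls traced further below at host/failover.sh@78-@80), the test attaches the same subsystem once per portal with -x failover:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# One bdev (NVMe0), three trids on the same NQN; -x failover lets bdev_nvme
# retry the next registered path when the active one drops.
for port in 4420 4421 4422; do
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 \
        -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
done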
00:30:47.691 13040.27 IOPS, 50.94 MiB/s [2024-11-17T03:47:26.521Z]
13499.50 IOPS, 52.73 MiB/s [2024-11-17T03:47:26.521Z]
13888.77 IOPS, 54.25 MiB/s [2024-11-17T03:47:26.521Z]
14220.21 IOPS, 55.55 MiB/s [2024-11-17T03:47:26.521Z]
14510.00 IOPS, 56.68 MiB/s
00:30:47.691 Latency(us)
00:30:47.691 [2024-11-17T03:47:26.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:47.691 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:47.691 Verification LBA range: start 0x0 length 0x4000
00:30:47.691 NVMe0n1 : 15.00 14510.34 56.68 305.17 0.00 8616.82 342.43 1033476.51
00:30:47.691 [2024-11-17T03:47:26.521Z] ===================================================================================================================
00:30:47.691 [2024-11-17T03:47:26.521Z] Total : 14510.34 56.68 305.17 0.00 8616.82 342.43 1033476.51
00:30:47.691 Received shutdown signal, test time was about 15.000000 seconds
00:30:47.691
00:30:47.691 Latency(us)
00:30:47.691 [2024-11-17T03:47:26.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:47.691 [2024-11-17T03:47:26.521Z] ===================================================================================================================
00:30:47.691 [2024-11-17T03:47:26.521Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=461997
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 461997 /var/tmp/bdevperf.sock
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 461997 ']'
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
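The pass/fail gate at host/failover.sh@65-@67 above simply counts reset notices in the captured bdevperf output. A standalone sketch of the same check, with try.txt standing in for that captured log as in this run:

# Each forced path failure should yield exactly one successful reset (3 here).
count=$(grep -c 'Resetting controller successful' try.txt)
if (( count != 3 )); then
    echo "expected 3 successful resets, saw $count" >&2
    exit 1
fi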
00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.691 04:47:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:47.691 04:47:26 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.691 04:47:26 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:47.691 04:47:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:47.691 [2024-11-17 04:47:26.393873] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:47.691 04:47:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:47.951 [2024-11-17 04:47:26.582493] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:30:47.951 04:47:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:48.209 NVMe0n1 00:30:48.209 04:47:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:48.469 00:30:48.469 04:47:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:48.728 00:30:48.728 04:47:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:48.728 04:47:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:48.987 04:47:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.987 04:47:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:52.279 04:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:52.279 04:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:52.279 04:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=462804 00:30:52.279 04:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:52.279 04:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 462804 00:30:53.659 { 00:30:53.659 "results": [ 00:30:53.659 { 00:30:53.659 "job": "NVMe0n1", 
00:30:53.659 "core_mask": "0x1", 00:30:53.659 "workload": "verify", 00:30:53.659 "status": "finished", 00:30:53.659 "verify_range": { 00:30:53.659 "start": 0, 00:30:53.659 "length": 16384 00:30:53.659 }, 00:30:53.659 "queue_depth": 128, 00:30:53.659 "io_size": 4096, 00:30:53.659 "runtime": 1.007106, 00:30:53.659 "iops": 18174.849519315743, 00:30:53.659 "mibps": 70.99550593482712, 00:30:53.659 "io_failed": 0, 00:30:53.659 "io_timeout": 0, 00:30:53.659 "avg_latency_us": 7007.001779020979, 00:30:53.659 "min_latency_us": 2215.1168, 00:30:53.659 "max_latency_us": 18245.2224 00:30:53.659 } 00:30:53.659 ], 00:30:53.659 "core_count": 1 00:30:53.659 } 00:30:53.659 04:47:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:53.659 [2024-11-17 04:47:26.018530] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:30:53.659 [2024-11-17 04:47:26.018589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461997 ] 00:30:53.659 [2024-11-17 04:47:26.110070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.659 [2024-11-17 04:47:26.129370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.659 [2024-11-17 04:47:27.751999] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:30:53.659 [2024-11-17 04:47:27.752573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:30:53.659 [2024-11-17 04:47:27.752604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:30:53.659 [2024-11-17 04:47:27.775129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:30:53.659 [2024-11-17 04:47:27.791121] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:30:53.659 Running I/O for 1 seconds... 
00:30:53.659 18149.00 IOPS, 70.89 MiB/s 00:30:53.659 Latency(us) 00:30:53.659 [2024-11-17T03:47:32.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.659 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:53.659 Verification LBA range: start 0x0 length 0x4000 00:30:53.659 NVMe0n1 : 1.01 18174.85 71.00 0.00 0.00 7007.00 2215.12 18245.22 00:30:53.659 [2024-11-17T03:47:32.489Z] =================================================================================================================== 00:30:53.659 [2024-11-17T03:47:32.489Z] Total : 18174.85 71.00 0.00 0.00 7007.00 2215.12 18245.22 00:30:53.659 04:47:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:53.659 04:47:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:53.659 04:47:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.918 04:47:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:53.918 04:47:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:53.918 04:47:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:54.177 04:47:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:57.467 04:47:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:57.467 04:47:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 461997 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 461997 ']' 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 461997 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 461997 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 461997' 00:30:57.467 killing process with pid 461997 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 461997 00:30:57.467 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 461997 00:30:57.727 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- 
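The detach sequence above (host/failover.sh@95-@101) is the other half of the check: drop one portal, give the path layer a moment, and confirm the controller is still reachable through the remaining trids. A condensed sketch of that step:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# Remove the 4422 path, then verify NVMe0 survived on another path.
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 \
    -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
$rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0 \
    || { echo 'NVMe0 gone after path removal' >&2; exit 1; }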
host/failover.sh@110 -- # sync 00:30:57.727 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.727 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:57.727 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:57.986 rmmod nvme_rdma 00:30:57.986 rmmod nvme_fabrics 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 458914 ']' 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 458914 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 458914 ']' 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 458914 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 458914 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 458914' 00:30:57.986 killing process with pid 458914 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 458914 00:30:57.986 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 458914 00:30:58.246 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:58.246 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:58.246 00:30:58.246 real 0m36.353s 00:30:58.246 user 1m58.651s 00:30:58.246 sys 0m7.762s 00:30:58.246 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.246 04:47:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:58.246 
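The nvmftestfini teardown above retries module unload up to 20 times (nvmf/common.sh@124-@127) because nvme-rdma can stay busy briefly while queue pairs drain. A condensed sketch of that retry idea; the sleep between attempts is an addition for illustration, not part of the traced loop:

set +e
for i in {1..20}; do
    # Removal fails with EBUSY until the last reference on the module is gone.
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e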
************************************ 00:30:58.246 END TEST nvmf_failover 00:30:58.246 ************************************ 00:30:58.246 04:47:36 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:58.246 04:47:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:58.246 04:47:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.246 04:47:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.246 ************************************ 00:30:58.246 START TEST nvmf_host_discovery 00:30:58.246 ************************************ 00:30:58.246 04:47:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:58.506 * Looking for test storage... 00:30:58.506 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:30:58.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:58.506 --rc genhtml_branch_coverage=1
00:30:58.506 --rc genhtml_function_coverage=1
00:30:58.506 --rc genhtml_legend=1
00:30:58.506 --rc geninfo_all_blocks=1
00:30:58.506 --rc geninfo_unexecuted_blocks=1
00:30:58.506
00:30:58.506 '
[... the identical --rc option block is then assigned three more times (LCOV_OPTS=..., export 'LCOV=lcov ...', LCOV='lcov ...'); the repeats are elided ...]
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
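The scripts/common.sh trace above is a component-wise dotted-version comparison, here concluding that lcov 1.15 sorts before 2. A trimmed-down standalone take on the same idea:

# Succeed if $1 sorts before $2, comparing dot-separated fields numerically.
version_lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # versions are equal
}
version_lt 1.15 2 && echo 'old lcov detected'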
00:30:58.506 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6: the /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin directories are prepended to PATH, then PATH is exported and echoed; the full PATH strings, with those segments already repeated many times over, are elided ...]
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:58.507 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
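The '[: : integer expression expected' complaint above comes from nvmf/common.sh line 33 handing an unset variable to a numeric test. A defensive pattern that keeps such traces quiet; SPDK_TEST_FLAG is a hypothetical stand-in for the variable that arrives empty here:

# Default the value before the numeric test so '[' never sees an empty operand.
if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
    echo 'flag enabled'
fi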
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']'
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:30:58.507 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0
00:30:58.507
00:30:58.507 real	0m0.236s
00:30:58.507 user	0m0.135s
00:30:58.507 sys	0m0.117s
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:58.507 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:58.507 ************************************
00:30:58.507 END TEST nvmf_host_discovery
00:30:58.507 ************************************
04:47:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
04:47:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
04:47:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
04:47:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_host_multipath_status
************************************
04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:30:58.767 * Looking for test storage...
00:30:58.767 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
[... the same lcov version probe and component-wise comparison (lt 1.15 2), the LCOV_OPTS/LCOV exports, the nvmf/common.sh environment setup (ports 4420-4422, IP prefix 192.168.100, hostnqn/hostid generation, NET_TYPE=phy) and the paths/export.sh PATH handling traced above for nvmf_host_discovery repeat here verbatim under the nvmf_host_multipath_status prefix; elided through nvmf/common.sh@31 ...]
00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:30:58.768 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:58.768 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:58.769 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.769 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.769 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.769 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:58.769 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:58.769 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:30:58.769 04:47:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:06.894 04:47:44 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:06.894 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:06.894 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:06.895 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:06.895 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:06.895 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:06.895 
04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:06.895 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:06.895 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:06.895 altname enp217s0f0np0 00:31:06.895 altname ens818f0np0 00:31:06.895 inet 192.168.100.8/24 scope global mlx_0_0 00:31:06.895 valid_lft forever preferred_lft forever 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:06.895 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:06.895 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:06.895 altname enp217s0f1np1 00:31:06.895 altname ens818f1np1 00:31:06.895 inet 192.168.100.9/24 scope global mlx_0_1 00:31:06.895 valid_lft forever preferred_lft forever 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:06.895 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:06.896 04:47:44 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:06.896 192.168.100.9' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:06.896 192.168.100.9' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:06.896 192.168.100.9' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.896 04:47:44 
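The address discovery above boils down to one pipeline per RDMA interface, after which the collected list is split into first and second target IPs. Reconstructed from the exact commands at nvmf/common.sh@116-@117 and @485-@486:

# get_ip_address as exercised in the trace: first IPv4 address of an interface.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# RDMA_IP_LIST holds one address per line; the trace peels them apart with
# head/tail, yielding 192.168.100.8 and 192.168.100.9 in this run.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)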
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=467103 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 467103 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 467103 ']' 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.896 [2024-11-17 04:47:44.779479] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:31:06.896 [2024-11-17 04:47:44.779529] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.896 [2024-11-17 04:47:44.868373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:06.896 [2024-11-17 04:47:44.889835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.896 [2024-11-17 04:47:44.889866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.896 [2024-11-17 04:47:44.889876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.896 [2024-11-17 04:47:44.889885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.896 [2024-11-17 04:47:44.889892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
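nvmfappstart launched the target on two cores and blocked until the RPC socket answered. A rough sketch of that launch-and-wait pattern, assuming rpc.py's -t timeout flag and the rpc_get_methods RPC; the real waitforlisten in autotest_common.sh is more elaborate (pid liveness checks, a retry budget):

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# Start the target with the flags from the trace (-i shm id, -e trace mask,
# -m core mask), then poll the default /var/tmp/spdk.sock until it is up.
"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
until "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do
    sleep 0.5
done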
00:31:06.896 [2024-11-17 04:47:44.891204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.896 [2024-11-17 04:47:44.891203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:06.896 04:47:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.896 04:47:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.896 04:47:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=467103 00:31:06.896 04:47:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:06.896 [2024-11-17 04:47:45.214428] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ff6590/0x1ffaa40) succeed. 00:31:06.896 [2024-11-17 04:47:45.223327] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ff7a90/0x203c0e0) succeed. 00:31:06.896 04:47:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:06.896 Malloc0 00:31:06.896 04:47:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:06.896 04:47:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:07.156 04:47:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:07.415 [2024-11-17 04:47:46.070431] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:07.415 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:31:07.675 [2024-11-17 04:47:46.278771] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=467401 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 467401 /var/tmp/bdevperf.sock 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 467401 ']' 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:07.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.675 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:07.934 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.934 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:07.934 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:07.934 04:47:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:08.193 Nvme0n1 00:31:08.453 04:47:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:08.453 Nvme0n1 00:31:08.712 04:47:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:08.712 04:47:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:10.617 04:47:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:10.617 04:47:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:31:10.876 04:47:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:10.876 04:47:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:12.255 04:47:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
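With bdevperf attached to nqn.2016-06.io.spdk:cnode1 over both listeners (-x multipath), every scenario below is driven through one helper. set_ANA_state as reconstructed from the rpc.py pair at multipath_status.sh@59-@60, with $rpc_py and $NQN as set at @15 and @21:

set_ANA_state() {
    # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
    $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

set_ANA_state optimized optimized   # the @90 call that opened this sequence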
host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:12.255 04:47:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:12.255 04:47:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.255 04:47:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:12.255 04:47:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.255 04:47:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:12.255 04:47:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.255 04:47:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:12.515 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:12.515 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:12.515 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.515 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:12.515 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.515 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:12.515 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.515 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:12.774 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.774 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:12.774 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.774 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:13.033 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.033 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:31:13.033 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.033 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:13.292 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.292 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:13.292 04:47:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:13.292 04:47:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:13.552 04:47:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:14.488 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:14.488 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:14.488 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.488 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:14.747 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:14.747 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:14.747 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.747 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:15.006 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.006 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:15.006 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.006 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:15.266 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.266 04:47:53 
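Each individual check above is one RPC against bdevperf's socket plus a jq filter over the io_paths dump. port_status reconstructed from the @64 rpc.py/jq/[[ ]] triplets that repeat throughout this section:

port_status() {
    # $1 = listener port (trsvcid), $2 = io_path attribute, $3 = expected value
    local status
    status=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
    [[ $status == "$3" ]]
}

port_status 4420 current true   # e.g. the @68 check after 'optimized optimized'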
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:15.266 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.266 04:47:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:15.266 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.266 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:15.266 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.266 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:15.525 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.525 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:15.525 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.525 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:15.785 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.785 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:15.785 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:16.045 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:31:16.045 04:47:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:17.423 04:47:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:17.423 04:47:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:17.423 04:47:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.423 04:47:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:17.423 04:47:56 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.423 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:17.423 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.423 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:17.423 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:17.423 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:17.682 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.682 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:17.682 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.682 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:17.682 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.682 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:17.941 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.941 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:17.941 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.941 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:18.200 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.200 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:18.200 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.200 04:47:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:18.200 04:47:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.200 04:47:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:31:18.200 04:47:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:18.459 04:47:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:18.718 04:47:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:19.654 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:19.654 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:19.654 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.654 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:19.913 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.913 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:19.913 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.913 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:20.172 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:20.172 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:20.172 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.172 04:47:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:20.432 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.432 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:20.432 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.432 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:20.432 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.432 04:47:59 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:20.432 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.432 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:20.690 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.690 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:20.690 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.690 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:20.949 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:20.949 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:20.949 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:31:21.207 04:47:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:21.207 04:48:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:22.586 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:22.586 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:22.586 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.586 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:22.586 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.586 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:22.586 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.586 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:22.586 04:48:01 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.586 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:22.846 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.846 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:22.846 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.846 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:22.846 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.846 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:23.105 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.105 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:23.105 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.105 04:48:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:23.365 04:48:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:23.365 04:48:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:23.365 04:48:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.365 04:48:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:23.624 04:48:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:23.624 04:48:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:23.624 04:48:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:31:23.624 04:48:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:23.884 04:48:02 
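check_status takes six expected flags, one per port_status call it fires; the argument order is inferred from the @68-@73 sequence above, so treat this as a sketch:

check_status() {
    # current / connected / accessible, each for ports 4420 then 4421
    port_status 4420 current    "$1"
    port_status 4421 current    "$2"
    port_status 4420 connected  "$3"
    port_status 4421 connected  "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

Read against the trace: 'check_status false false true true false false' at @110 says both paths stay connected while neither is current or accessible, exactly what setting both ANA states to inaccessible should produce.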
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:24.823 04:48:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:24.823 04:48:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:24.823 04:48:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.823 04:48:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:25.082 04:48:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:25.082 04:48:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:25.082 04:48:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.082 04:48:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:25.341 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.341 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:25.341 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.341 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:25.600 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.600 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:25.600 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.600 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:25.859 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.859 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:25.859 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.859 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:25.859 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:31:25.859 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:25.859 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.859 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:26.118 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.118 04:48:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:26.377 04:48:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:26.377 04:48:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:31:26.636 04:48:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:26.636 04:48:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:28.013 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:28.013 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:28.013 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.013 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:28.013 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.013 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:28.013 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.013 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.272 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.272 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.272 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.272 04:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:28.272 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.272 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:28.272 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.272 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:28.531 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.531 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:28.532 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.532 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:28.790 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.790 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:28.790 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.790 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:29.049 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.049 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:29.049 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:29.049 04:48:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:29.308 04:48:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:30.245 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:30.245 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:30.245 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.245 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.505 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:30.505 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:30.505 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.505 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:30.770 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.770 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:30.770 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.770 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:31.028 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.028 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:31.028 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.028 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.028 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.028 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:31.028 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.028 04:48:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:31.287 04:48:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.287 04:48:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:31.288 04:48:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.288 04:48:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:31.547 04:48:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.547 04:48:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:31.547 04:48:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:31.806 04:48:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:31:32.065 04:48:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:33.003 04:48:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:33.003 04:48:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:33.003 04:48:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.003 04:48:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:33.263 04:48:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.263 04:48:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:33.263 04:48:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.263 04:48:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:33.263 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.263 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:33.263 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.263 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:33.523 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.523 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:33.523 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:31:33.523 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:33.782 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.782 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:33.782 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.782 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:34.042 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.042 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:34.042 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.042 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:34.042 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.042 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:34.042 04:48:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:34.301 04:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:34.560 04:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:35.498 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:35.498 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:35.498 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.498 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:35.758 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.758 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:35.758 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.758 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:36.017 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:36.017 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:36.017 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.017 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:36.017 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.017 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:36.017 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.017 04:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:36.276 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.276 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:36.276 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.276 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:36.535 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.535 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:36.535 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.535 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 467401 00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 467401 ']' 00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 467401 00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname
00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 467401
00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 467401'
00:31:36.794 killing process with pid 467401
00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 467401
00:31:36.794 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 467401
00:31:36.794 {
00:31:36.794   "results": [
00:31:36.794     {
00:31:36.794       "job": "Nvme0n1",
00:31:36.794       "core_mask": "0x4",
00:31:36.794       "workload": "verify",
00:31:36.794       "status": "terminated",
00:31:36.794       "verify_range": {
00:31:36.794         "start": 0,
00:31:36.794         "length": 16384
00:31:36.794       },
00:31:36.794       "queue_depth": 128,
00:31:36.794       "io_size": 4096,
00:31:36.794       "runtime": 28.034215,
00:31:36.794       "iops": 16248.216688072058,
00:31:36.794       "mibps": 63.469596437781476,
00:31:36.794       "io_failed": 0,
00:31:36.794       "io_timeout": 0,
00:31:36.794       "avg_latency_us": 7856.296089352941,
00:31:36.794       "min_latency_us": 1101.0048,
00:31:36.795       "max_latency_us": 3019898.88
00:31:36.795     }
00:31:36.795   ],
00:31:36.795   "core_count": 1
00:31:36.795 }
00:31:37.060 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 467401
00:31:37.060 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:37.060 [2024-11-17 04:47:46.353437] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:31:37.060 [2024-11-17 04:47:46.353496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467401 ]
00:31:37.060 [2024-11-17 04:47:46.446672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:37.060 [2024-11-17 04:47:46.468907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:37.060 Running I/O for 90 seconds...
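[Note on the trace above] The sequence traced above is the core loop of the multipath status test: after each set_ANA_state transition, check_status (multipath_status.sh@68-@73) makes six port_status assertions, one per port/field pair, and each port_status (multipath_status.sh@64) is the same three-step pattern: query bdev_nvme_get_io_paths over the bdevperf RPC socket, extract one field of the matching io_path with jq, and compare it to the expected literal. The sketch below reconstructs those helpers from the xtrace output alone; it is an inferred reading of multipath_status.sh, not the repository source, and the paths, NQN and addresses are simply the values used in this run.

#!/usr/bin/env bash
# Sketch of the traced helpers, reconstructed from the xtrace lines above.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# port_status PORT FIELD EXPECTED -- read one field (current/connected/accessible)
# of the io_path whose listener uses PORT, and fail unless it matches EXPECTED.
port_status() {
	local port=$1 field=$2 expected=$3 actual
	actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
		jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
	[[ "$actual" == "$expected" ]]
}

# check_status -- the six assertions made after every ANA transition, in trace order.
check_status() {
	port_status 4420 current "$1" && port_status 4421 current "$2" &&
		port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
		port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

# set_ANA_state STATE_4420 STATE_4421 -- retarget both listeners via the target's RPC socket.
set_ANA_state() {
	"$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t rdma -a 192.168.100.8 -s 4420 -n "$1"
	"$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

The results block printed by bdevperf once killprocess stops it is self-consistent: 16248.22 IOPS x 4096 B per I/O / 2^20 = 63.47 MiB/s, matching the reported "mibps", and 16248.22 IOPS x 28.03 s of runtime is roughly 455K verify I/Os, all with "io_failed": 0.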
00:31:37.060 18944.00 IOPS, 74.00 MiB/s [2024-11-17T03:48:15.890Z] 19072.00 IOPS, 74.50 MiB/s [2024-11-17T03:48:15.890Z] 18976.67 IOPS, 74.13 MiB/s [2024-11-17T03:48:15.890Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-17T03:48:15.890Z] 18931.60 IOPS, 73.95 MiB/s [2024-11-17T03:48:15.890Z] 18972.50 IOPS, 74.11 MiB/s [2024-11-17T03:48:15.890Z] 18990.00 IOPS, 74.18 MiB/s [2024-11-17T03:48:15.890Z] 18999.62 IOPS, 74.22 MiB/s [2024-11-17T03:48:15.890Z] 19001.11 IOPS, 74.22 MiB/s [2024-11-17T03:48:15.890Z] 18993.10 IOPS, 74.19 MiB/s [2024-11-17T03:48:15.890Z] 18999.91 IOPS, 74.22 MiB/s [2024-11-17T03:48:15.890Z] 19002.17 IOPS, 74.23 MiB/s [2024-11-17T03:48:15.890Z] [2024-11-17 04:47:59.807389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x180c00 00:31:37.060 [2024-11-17 04:47:59.807429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x180c00 00:31:37.060 [2024-11-17 04:47:59.807476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x180c00 00:31:37.060 [2024-11-17 04:47:59.807498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x180c00 00:31:37.060 [2024-11-17 04:47:59.807520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x180c00 00:31:37.060 [2024-11-17 04:47:59.807625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:37.060 [2024-11-17 04:47:59.807747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.060 [2024-11-17 04:47:59.807756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49736 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.807987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.807995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:49208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808232] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
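[Note on the dump above and below] These entries are bdevperf's qpair trace from a window in which one listener had just been made inaccessible: each print_command/print_completion pair is an I/O on qid:1 that completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. NVMe path-related status (type 3h), status code 02h, which is the completion a host should see on a path whose ANA state is inaccessible, and what drives the current/accessible flips asserted earlier. Rather than reading the dump line by line, a summary can be pulled out of try.txt (the file cat'd above) with a one-liner like the following; the counts are of course run-dependent:

awk '/nvme_io_qpair_print_command/ { if (/READ sqid/) r++; else if (/WRITE sqid/) w++ }
     /ASYMMETRIC ACCESS INACCESSIBLE/ { c++ }
     END { printf "reads=%d writes=%d inaccessible_completions=%d\n", r, w, c }' \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt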
00:31:37.061 [2024-11-17 04:47:59.808430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.061 [2024-11-17 04:47:59.808450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:37.061 [2024-11-17 04:47:59.808541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x180c00 00:31:37.061 [2024-11-17 04:47:59.808550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x180c00 00:31:37.062 [2024-11-17 04:47:59.808894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.062 [2024-11-17 04:47:59.808914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.062 [2024-11-17 04:47:59.808938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.062 [2024-11-17 04:47:59.808958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.062 [2024-11-17 04:47:59.808978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:37.062 [2024-11-17 04:47:59.808989] nvme_qpair.c: 
00:31:37.062 [2024-11-17 04:47:59.808998 - 04:47:59.810798] nvme_qpair.c: repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: WRITE sqid:1 nsid:1 lba:49968-50184 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 nsid:1 lba:49432-49632 len:8 (SGL KEYED DATA BLOCK len:0x1000 key:0x180c00), every command completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:31:37.064 18117.08 IOPS, 70.77 MiB/s [2024-11-17T03:48:15.894Z] 16823.00 IOPS, 65.71 MiB/s [2024-11-17T03:48:15.894Z] 15701.47 IOPS, 61.33 MiB/s [2024-11-17T03:48:15.894Z] 15440.81 IOPS, 60.32 MiB/s [2024-11-17T03:48:15.894Z] 15658.06 IOPS, 61.16 MiB/s [2024-11-17T03:48:15.894Z] 15805.33 IOPS, 61.74 MiB/s [2024-11-17T03:48:15.894Z] 15771.42 IOPS, 61.61 MiB/s [2024-11-17T03:48:15.894Z] 15741.60 IOPS, 61.49 MiB/s [2024-11-17T03:48:15.894Z] 15824.71 IOPS, 61.82 MiB/s [2024-11-17T03:48:15.894Z] 15988.14 IOPS, 62.45 MiB/s [2024-11-17T03:48:15.894Z] 16119.52 IOPS, 62.97 MiB/s [2024-11-17T03:48:15.894Z] 16105.08 IOPS, 62.91 MiB/s [2024-11-17T03:48:15.894Z] 16060.24 IOPS, 62.74 MiB/s [2024-11-17T03:48:15.894Z]
00:31:37.064 [2024-11-17 04:48:13.201166 - 04:48:13.206527] nvme_qpair.c: the same *NOTICE* command/completion pattern, now for READ sqid:1 nsid:1 lba:24984-26000 len:8 and WRITE sqid:1 nsid:1 lba:25456-26232 len:8, again with every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:31:37.068 [2024-11-17 04:48:13.206539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.206548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.206559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.206568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.206581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.206590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.206601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.206610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.206622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.206630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.208433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.208455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.208469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.208478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.208493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.208502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.208514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.208523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.208534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.208543] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.208555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.208564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.208942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.208954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.208966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.208975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.208986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.208995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.209037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.209057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.209078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.209185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.209247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.209268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.209289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x180c00 
00:31:37.068 [2024-11-17 04:48:13.209309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.209363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.209372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.218564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x180c00 00:31:37.068 [2024-11-17 04:48:13.218578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.218593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.218604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:37.068 [2024-11-17 04:48:13.218619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.068 [2024-11-17 04:48:13.218629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.218643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.218653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.218667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.218678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.218691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.218702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:31:37.069 [2024-11-17 04:48:13.218716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.218726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.218739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.218750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.218763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.218774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.218787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.218798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.218812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.218826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.218976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.218990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25000 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200004362000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.219159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.219207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.219329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.219378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.069 [2024-11-17 04:48:13.219476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:31:37.069 [2024-11-17 04:48:13.219540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x180c00 00:31:37.069 [2024-11-17 04:48:13.219551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:37.069 [2024-11-17 04:48:13.219565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.219575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.219588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.219599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.219612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.219623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.219636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.219647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.219660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.219671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.221375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.221410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.221434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221459] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.221559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.221584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.221696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.221747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.221770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.221794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.221983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.221994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.222018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.222043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.222067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.222090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.222115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.222139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.070 [2024-11-17 04:48:13.222163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.222189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.222213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:37.070 [2024-11-17 04:48:13.222228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x180c00 00:31:37.070 [2024-11-17 04:48:13.222238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
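For readers decoding the omitted notices: the "(03/02)" pair is the NVMe completion Status Code Type (0x3, Path Related Status) and Status Code (0x2, Asymmetric Access Inaccessible), and dnr:0 means the controller did not forbid a retry, so the host may resubmit on another path. A minimal bash sketch of how those fields unpack from completion queue entry dword 3 under the standard NVMe CQE layout; the raw value below is constructed for illustration, not taken from this log:

cqe_dw3=$(( (0x3 << 25) | (0x02 << 17) ))   # hypothetical raw DW3 with SCT=0x3, SC=0x02
sc=$(( (cqe_dw3 >> 17) & 0xff ))            # Status Code       -> 0x02 (ANA inaccessible)
sct=$(( (cqe_dw3 >> 25) & 0x7 ))            # Status Code Type  -> 0x3  (path related)
dnr=$(( (cqe_dw3 >> 31) & 0x1 ))            # Do Not Retry      -> 0, matching dnr:0 above
printf 'sct=%#x sc=%#x dnr=%d\n' "$sct" "$sc" "$dnr"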
00:31:37.070 16042.88 IOPS, 62.67 MiB/s [2024-11-17T03:48:15.901Z]
16151.78 IOPS, 63.09 MiB/s [2024-11-17T03:48:15.901Z]
16250.32 IOPS, 63.48 MiB/s [2024-11-17T03:48:15.901Z]
Received shutdown signal, test time was about 28.034829 seconds
00:31:37.071
00:31:37.071                                  Latency(us)
00:31:37.071 [2024-11-17T03:48:15.901Z] Device Information   : runtime(s)      IOPS     MiB/s    Fail/s     TO/s    Average        min        max
00:31:37.071 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:37.071 Verification LBA range: start 0x0 length 0x4000
00:31:37.071 Nvme0n1              :      28.03  16248.22     63.47      0.00     0.00    7856.30    1101.00 3019898.88
00:31:37.071 [2024-11-17T03:48:15.901Z] ===================================================================================================================
00:31:37.071 [2024-11-17T03:48:15.901Z] Total                :           16248.22     63.47      0.00     0.00    7856.30    1101.00 3019898.88
00:31:37.071 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:37.071 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:37.071 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:37.071 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:37.071 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:37.071 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 467103 ']'
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 467103
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 467103 ']'
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 467103
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 467103
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 467103'
killing process with pid 467103
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 467103
00:31:37.330 04:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 467103
00:31:37.589 04:48:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:37.589 04:48:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:31:37.589
00:31:37.589 real    0m38.922s
00:31:37.589 user    1m49.937s
00:31:37.589 sys     0m9.565s
00:31:37.589 04:48:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:37.589 04:48:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:37.589 ************************************
00:31:37.589 END TEST nvmf_host_multipath_status
00:31:37.589 ************************************
00:31:37.589 04:48:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
04:48:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
04:48:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
04:48:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_discovery_remove_ifc
************************************
04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
* Looking for test storage...
00:31:37.589 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:31:37.849 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:31:37.850  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:37.850  --rc genhtml_branch_coverage=1
00:31:37.850  --rc genhtml_function_coverage=1
00:31:37.850  --rc genhtml_legend=1
00:31:37.850  --rc geninfo_all_blocks=1
00:31:37.850  --rc geninfo_unexecuted_blocks=1
00:31:37.850
00:31:37.850  '
00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:31:37.850  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:37.850  --rc genhtml_branch_coverage=1
00:31:37.850  --rc genhtml_function_coverage=1
00:31:37.850  --rc genhtml_legend=1
00:31:37.850  --rc geninfo_all_blocks=1
00:31:37.850  --rc geninfo_unexecuted_blocks=1
00:31:37.850
00:31:37.850  '
00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:31:37.850  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:37.850  --rc genhtml_branch_coverage=1
00:31:37.850  --rc genhtml_function_coverage=1
00:31:37.850  --rc genhtml_legend=1
00:31:37.850  --rc geninfo_all_blocks=1
00:31:37.850  --rc geninfo_unexecuted_blocks=1
00:31:37.850
00:31:37.850  '
00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:31:37.850  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:37.850  --rc genhtml_branch_coverage=1
00:31:37.850  --rc genhtml_function_coverage=1
00:31:37.850  --rc genhtml_legend=1
00:31:37.850  --rc geninfo_all_blocks=1
00:31:37.850  --rc geninfo_unexecuted_blocks=1
00:31:37.850
00:31:37.850  '
00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
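The cmp_versions walk traced above is scripts/common.sh evaluating "lt 1.15 2": each version string is split into fields (on '.', '-' and ':' in the traced helper), each field is validated as a decimal, and the first differing field decides the comparison, so 1 < 2 yields return 0. A minimal standalone sketch of that logic, a reimplementation for illustration (splitting on '.' only for brevity), not the scripts/common.sh source itself:

version_lt() {   # usage: version_lt 1.15 2 -> exit 0 iff $1 < $2, compared field by field
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing trailing fields compare as 0
        (( a < b )) && return 0             # first differing field decides: here 1 < 2
        (( a > b )) && return 1
    done
    return 1                                # equal versions are not "less than"
}
version_lt 1.15 2 && echo '1.15 < 2'        # prints, matching the return 0 in the trace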
00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:37.850 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
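paths/export.sh prepends the Go, golangci and protoc directories on every source, so after a few test stages the PATH echoed above carries the same entries many times over. That is harmless but noisy; the usual guard against it looks like the sketch below (path_prepend is an illustrative helper, not part of the SPDK scripts):

  # Prepend a directory to PATH only when it is not already there.
  path_prepend() {
      local dir=$1
      case ":$PATH:" in
          *":$dir:"*) ;;              # already present: nothing to do
          *) PATH=$dir:$PATH ;;
      esac
  }

  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/go/1.21.1/bin     # second call is a no-op
  export PATH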
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:31:37.850 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:31:37.850 00:31:37.850 real 0m0.230s 00:31:37.850 user 0m0.121s 00:31:37.850 sys 0m0.128s 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.850 ************************************ 00:31:37.850 END TEST nvmf_discovery_remove_ifc 00:31:37.850 ************************************ 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.850 ************************************ 00:31:37.850 START TEST nvmf_identify_kernel_target 00:31:37.850 ************************************ 00:31:37.850 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:31:38.111 * Looking for test storage... 00:31:38.111 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
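The run_test call visible above wraps each test script with START/END banners and a timed execution; the `real 0m0.230s / user 0m0.121s / sys 0m0.128s` lines between the banners are its output, and the `'[' 3 -le 1 ']'` check is its argument-count guard. A hypothetical minimal stand-in for that wrapper (the real helper in autotest_common.sh also manages xtrace and other bookkeeping elided here):

  # Illustrative stand-in for the run_test wrapper: guard, banner, timed run, banner.
  run_test() {
      [ "$#" -le 1 ] && return 1          # need a test name plus a command, as in the trace
      local test_name=$1; shift
      echo "************ START TEST $test_name ************"
      time "$@"                           # e.g. ./identify_kernel_nvmf.sh --transport=rdma
      local rc=$?
      echo "************ END TEST $test_name ************"
      return $rc
  }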
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:38.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.111 --rc genhtml_branch_coverage=1 00:31:38.111 --rc genhtml_function_coverage=1 00:31:38.111 --rc genhtml_legend=1 00:31:38.111 --rc geninfo_all_blocks=1 00:31:38.111 --rc geninfo_unexecuted_blocks=1 00:31:38.111 00:31:38.111 ' 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:38.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.111 --rc genhtml_branch_coverage=1 00:31:38.111 --rc genhtml_function_coverage=1 00:31:38.111 --rc genhtml_legend=1 00:31:38.111 --rc geninfo_all_blocks=1 00:31:38.111 --rc geninfo_unexecuted_blocks=1 00:31:38.111 00:31:38.111 ' 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:38.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.111 --rc genhtml_branch_coverage=1 00:31:38.111 --rc genhtml_function_coverage=1 00:31:38.111 --rc genhtml_legend=1 00:31:38.111 --rc geninfo_all_blocks=1 00:31:38.111 --rc geninfo_unexecuted_blocks=1 00:31:38.111 00:31:38.111 ' 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:38.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.111 --rc genhtml_branch_coverage=1 00:31:38.111 --rc genhtml_function_coverage=1 00:31:38.111 --rc genhtml_legend=1 00:31:38.111 --rc geninfo_all_blocks=1 00:31:38.111 --rc geninfo_unexecuted_blocks=1 00:31:38.111 00:31:38.111 ' 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.111 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:38.128 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:38.128 04:48:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:46.252 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:46.252 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:46.252 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:46.252 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.252 04:48:23 
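The "Found net devices under ..." lines come from a sysfs glob rather than any tool: listing /sys/bus/pci/devices/<pci>/net/ yields the interface names owned by that PCI function, and a prefix strip leaves the bare names, which is how 0000:d9:00.0 maps to mlx_0_0. The same lookup stand-alone (a sketch; error handling for functions with no netdev omitted):

  # Map a PCI function to its network interface name(s) through sysfs.
  pci=0000:d9:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/mlx_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")             # strip to the basename
  echo "Found net devices under $pci: ${pci_net_devs[*]}"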
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:46.252 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:46.253 
04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:46.253 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:46.253 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:46.253 altname enp217s0f0np0 00:31:46.253 altname ens818f0np0 00:31:46.253 inet 192.168.100.8/24 scope global mlx_0_0 00:31:46.253 valid_lft forever preferred_lft forever 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:46.253 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:46.253 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:46.253 altname enp217s0f1np1 00:31:46.253 altname ens818f1np1 00:31:46.253 inet 192.168.100.9/24 scope global mlx_0_1 00:31:46.253 valid_lft forever preferred_lft forever 00:31:46.253 04:48:23 
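get_ip_address, traced above for both ports, boils one line of `ip -o -4 addr show` down to a bare IPv4 address: field 4 holds the CIDR form (192.168.100.8/24) and cut drops the prefix length. Stand-alone the same pipeline reads:

  # Reduce `ip -o -4 addr show <interface>` to the bare IPv4 address.
  interface=mlx_0_0
  ip=$(ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1)
  echo "$ip"    # 192.168.100.8 on this rig

The same call against mlx_0_1 yields 192.168.100.9, giving the two-entry RDMA_IP_LIST assembled just below.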
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:46.253 04:48:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:46.253 
04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:46.253 192.168.100.9' 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:46.253 192.168.100.9' 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:46.253 192.168.100.9' 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:46.253 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:46.254 04:48:24 
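get_main_ns_ip, which the trace is stepping through here, never stores the address itself: the associative array maps each transport to a variable *name* (rdma to NVMF_FIRST_TARGET_IP), and bash's ${!var} indirection resolves that name to 192.168.100.8 only at the end. The pattern in isolation (values hard-coded for illustration; the tcp address is a placeholder):

  # Resolve the target IP for a transport by variable-name indirection.
  NVMF_FIRST_TARGET_IP=192.168.100.8
  NVMF_INITIATOR_IP=10.0.0.1                              # placeholder for the tcp path
  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)

  transport=rdma
  var=${ip_candidates[$transport]}    # -> NVMF_FIRST_TARGET_IP
  echo "${!var}"                      # -> 192.168.100.8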
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:46.254 04:48:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:31:48.799 Waiting for block devices as requested 00:31:48.799 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:48.799 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:49.058 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:49.058 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:49.058 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:49.318 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:49.318 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:49.318 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:49.577 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:49.577 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:49.577 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:49.836 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:49.836 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:49.836 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:50.098 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:50.098 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:50.098 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:50.357 04:48:29 
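The remainder of the setup below settles on /dev/nvme0n1 (no GPT signature, not zoned, not in use) and then configure_kernel_target assembles the kernel NVMe-oF target entirely through configfs: one mkdir per subsystem, namespace and port, an echo per attribute, and a symlink binding subsystem to port. xtrace does not print redirection targets, so the attribute file names in this condensed sketch are the standard nvmet configfs ones rather than lifted from the log:

  # Kernel NVMe-oF target over RDMA via configfs (run as root, nvmet/nvmet-rdma loaded).
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # surfaces later as Model Number
  echo 1             > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
  echo rdma          > "$nvmet/ports/1/addr_trtype"
  echo 4420          > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4          > "$nvmet/ports/1/addr_adrfam"

  ln -s "$subsys" "$nvmet/ports/1/subsystems/"    # activates the export on port 1

With that in place, the `nvme discover ... -t rdma -a 192.168.100.8 -s 4420` run that follows reports exactly two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.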
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:50.357 No valid GPT data, bailing 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:31:50.357 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:50.617 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:31:50.617 00:31:50.617 Discovery Log Number of Records 2, Generation counter 2 00:31:50.617 =====Discovery Log Entry 0====== 00:31:50.617 trtype: rdma 00:31:50.617 adrfam: ipv4 00:31:50.617 subtype: current discovery subsystem 00:31:50.617 treq: not specified, sq 
flow control disable supported 00:31:50.617 portid: 1 00:31:50.617 trsvcid: 4420 00:31:50.617 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:50.617 traddr: 192.168.100.8 00:31:50.617 eflags: none 00:31:50.617 rdma_prtype: not specified 00:31:50.617 rdma_qptype: connected 00:31:50.617 rdma_cms: rdma-cm 00:31:50.617 rdma_pkey: 0x0000 00:31:50.617 =====Discovery Log Entry 1====== 00:31:50.617 trtype: rdma 00:31:50.617 adrfam: ipv4 00:31:50.617 subtype: nvme subsystem 00:31:50.617 treq: not specified, sq flow control disable supported 00:31:50.617 portid: 1 00:31:50.617 trsvcid: 4420 00:31:50.617 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:50.617 traddr: 192.168.100.8 00:31:50.617 eflags: none 00:31:50.617 rdma_prtype: not specified 00:31:50.617 rdma_qptype: connected 00:31:50.617 rdma_cms: rdma-cm 00:31:50.617 rdma_pkey: 0x0000 00:31:50.617 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:31:50.617 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:50.876 ===================================================== 00:31:50.876 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:50.876 ===================================================== 00:31:50.876 Controller Capabilities/Features 00:31:50.876 ================================ 00:31:50.876 Vendor ID: 0000 00:31:50.876 Subsystem Vendor ID: 0000 00:31:50.876 Serial Number: 6fdc57c967908b221c86 00:31:50.876 Model Number: Linux 00:31:50.876 Firmware Version: 6.8.9-20 00:31:50.876 Recommended Arb Burst: 0 00:31:50.876 IEEE OUI Identifier: 00 00 00 00:31:50.876 Multi-path I/O 00:31:50.876 May have multiple subsystem ports: No 00:31:50.876 May have multiple controllers: No 00:31:50.876 Associated with SR-IOV VF: No 00:31:50.876 Max Data Transfer Size: Unlimited 00:31:50.876 Max Number of Namespaces: 0 00:31:50.876 Max Number of I/O Queues: 1024 00:31:50.876 NVMe Specification Version (VS): 1.3 00:31:50.876 NVMe Specification Version (Identify): 1.3 00:31:50.876 Maximum Queue Entries: 128 00:31:50.876 Contiguous Queues Required: No 00:31:50.876 Arbitration Mechanisms Supported 00:31:50.876 Weighted Round Robin: Not Supported 00:31:50.876 Vendor Specific: Not Supported 00:31:50.876 Reset Timeout: 7500 ms 00:31:50.876 Doorbell Stride: 4 bytes 00:31:50.876 NVM Subsystem Reset: Not Supported 00:31:50.876 Command Sets Supported 00:31:50.876 NVM Command Set: Supported 00:31:50.876 Boot Partition: Not Supported 00:31:50.876 Memory Page Size Minimum: 4096 bytes 00:31:50.876 Memory Page Size Maximum: 4096 bytes 00:31:50.876 Persistent Memory Region: Not Supported 00:31:50.876 Optional Asynchronous Events Supported 00:31:50.876 Namespace Attribute Notices: Not Supported 00:31:50.876 Firmware Activation Notices: Not Supported 00:31:50.876 ANA Change Notices: Not Supported 00:31:50.876 PLE Aggregate Log Change Notices: Not Supported 00:31:50.876 LBA Status Info Alert Notices: Not Supported 00:31:50.876 EGE Aggregate Log Change Notices: Not Supported 00:31:50.876 Normal NVM Subsystem Shutdown event: Not Supported 00:31:50.876 Zone Descriptor Change Notices: Not Supported 00:31:50.876 Discovery Log Change Notices: Supported 00:31:50.876 Controller Attributes 00:31:50.876 128-bit Host Identifier: Not Supported 00:31:50.876 Non-Operational Permissive Mode: Not Supported 00:31:50.876 NVM Sets: Not Supported 00:31:50.876 Read Recovery Levels: 
Not Supported 00:31:50.876 Endurance Groups: Not Supported 00:31:50.876 Predictable Latency Mode: Not Supported 00:31:50.876 Traffic Based Keep ALive: Not Supported 00:31:50.876 Namespace Granularity: Not Supported 00:31:50.876 SQ Associations: Not Supported 00:31:50.876 UUID List: Not Supported 00:31:50.876 Multi-Domain Subsystem: Not Supported 00:31:50.876 Fixed Capacity Management: Not Supported 00:31:50.876 Variable Capacity Management: Not Supported 00:31:50.876 Delete Endurance Group: Not Supported 00:31:50.876 Delete NVM Set: Not Supported 00:31:50.876 Extended LBA Formats Supported: Not Supported 00:31:50.876 Flexible Data Placement Supported: Not Supported 00:31:50.876 00:31:50.876 Controller Memory Buffer Support 00:31:50.876 ================================ 00:31:50.876 Supported: No 00:31:50.876 00:31:50.876 Persistent Memory Region Support 00:31:50.876 ================================ 00:31:50.876 Supported: No 00:31:50.876 00:31:50.876 Admin Command Set Attributes 00:31:50.876 ============================ 00:31:50.876 Security Send/Receive: Not Supported 00:31:50.876 Format NVM: Not Supported 00:31:50.876 Firmware Activate/Download: Not Supported 00:31:50.876 Namespace Management: Not Supported 00:31:50.876 Device Self-Test: Not Supported 00:31:50.876 Directives: Not Supported 00:31:50.876 NVMe-MI: Not Supported 00:31:50.876 Virtualization Management: Not Supported 00:31:50.876 Doorbell Buffer Config: Not Supported 00:31:50.876 Get LBA Status Capability: Not Supported 00:31:50.876 Command & Feature Lockdown Capability: Not Supported 00:31:50.876 Abort Command Limit: 1 00:31:50.876 Async Event Request Limit: 1 00:31:50.876 Number of Firmware Slots: N/A 00:31:50.876 Firmware Slot 1 Read-Only: N/A 00:31:50.876 Firmware Activation Without Reset: N/A 00:31:50.876 Multiple Update Detection Support: N/A 00:31:50.876 Firmware Update Granularity: No Information Provided 00:31:50.876 Per-Namespace SMART Log: No 00:31:50.876 Asymmetric Namespace Access Log Page: Not Supported 00:31:50.876 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:50.876 Command Effects Log Page: Not Supported 00:31:50.876 Get Log Page Extended Data: Supported 00:31:50.876 Telemetry Log Pages: Not Supported 00:31:50.876 Persistent Event Log Pages: Not Supported 00:31:50.876 Supported Log Pages Log Page: May Support 00:31:50.876 Commands Supported & Effects Log Page: Not Supported 00:31:50.876 Feature Identifiers & Effects Log Page:May Support 00:31:50.876 NVMe-MI Commands & Effects Log Page: May Support 00:31:50.876 Data Area 4 for Telemetry Log: Not Supported 00:31:50.876 Error Log Page Entries Supported: 1 00:31:50.876 Keep Alive: Not Supported 00:31:50.876 00:31:50.876 NVM Command Set Attributes 00:31:50.876 ========================== 00:31:50.876 Submission Queue Entry Size 00:31:50.876 Max: 1 00:31:50.876 Min: 1 00:31:50.876 Completion Queue Entry Size 00:31:50.876 Max: 1 00:31:50.876 Min: 1 00:31:50.876 Number of Namespaces: 0 00:31:50.876 Compare Command: Not Supported 00:31:50.876 Write Uncorrectable Command: Not Supported 00:31:50.876 Dataset Management Command: Not Supported 00:31:50.876 Write Zeroes Command: Not Supported 00:31:50.876 Set Features Save Field: Not Supported 00:31:50.876 Reservations: Not Supported 00:31:50.876 Timestamp: Not Supported 00:31:50.876 Copy: Not Supported 00:31:50.876 Volatile Write Cache: Not Present 00:31:50.876 Atomic Write Unit (Normal): 1 00:31:50.876 Atomic Write Unit (PFail): 1 00:31:50.876 Atomic Compare & Write Unit: 1 00:31:50.876 Fused Compare & Write: Not 
Supported 00:31:50.876 Scatter-Gather List 00:31:50.876 SGL Command Set: Supported 00:31:50.876 SGL Keyed: Supported 00:31:50.876 SGL Bit Bucket Descriptor: Not Supported 00:31:50.876 SGL Metadata Pointer: Not Supported 00:31:50.876 Oversized SGL: Not Supported 00:31:50.876 SGL Metadata Address: Not Supported 00:31:50.876 SGL Offset: Supported 00:31:50.876 Transport SGL Data Block: Not Supported 00:31:50.876 Replay Protected Memory Block: Not Supported 00:31:50.876 00:31:50.876 Firmware Slot Information 00:31:50.876 ========================= 00:31:50.876 Active slot: 0 00:31:50.876 00:31:50.876 00:31:50.876 Error Log 00:31:50.876 ========= 00:31:50.877 00:31:50.877 Active Namespaces 00:31:50.877 ================= 00:31:50.877 Discovery Log Page 00:31:50.877 ================== 00:31:50.877 Generation Counter: 2 00:31:50.877 Number of Records: 2 00:31:50.877 Record Format: 0 00:31:50.877 00:31:50.877 Discovery Log Entry 0 00:31:50.877 ---------------------- 00:31:50.877 Transport Type: 1 (RDMA) 00:31:50.877 Address Family: 1 (IPv4) 00:31:50.877 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:50.877 Entry Flags: 00:31:50.877 Duplicate Returned Information: 0 00:31:50.877 Explicit Persistent Connection Support for Discovery: 0 00:31:50.877 Transport Requirements: 00:31:50.877 Secure Channel: Not Specified 00:31:50.877 Port ID: 1 (0x0001) 00:31:50.877 Controller ID: 65535 (0xffff) 00:31:50.877 Admin Max SQ Size: 32 00:31:50.877 Transport Service Identifier: 4420 00:31:50.877 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:50.877 Transport Address: 192.168.100.8 00:31:50.877 Transport Specific Address Subtype - RDMA 00:31:50.877 RDMA QP Service Type: 1 (Reliable Connected) 00:31:50.877 RDMA Provider Type: 1 (No provider specified) 00:31:50.877 RDMA CM Service: 1 (RDMA_CM) 00:31:50.877 Discovery Log Entry 1 00:31:50.877 ---------------------- 00:31:50.877 Transport Type: 1 (RDMA) 00:31:50.877 Address Family: 1 (IPv4) 00:31:50.877 Subsystem Type: 2 (NVM Subsystem) 00:31:50.877 Entry Flags: 00:31:50.877 Duplicate Returned Information: 0 00:31:50.877 Explicit Persistent Connection Support for Discovery: 0 00:31:50.877 Transport Requirements: 00:31:50.877 Secure Channel: Not Specified 00:31:50.877 Port ID: 1 (0x0001) 00:31:50.877 Controller ID: 65535 (0xffff) 00:31:50.877 Admin Max SQ Size: 32 00:31:50.877 Transport Service Identifier: 4420 00:31:50.877 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:50.877 Transport Address: 192.168.100.8 00:31:50.877 Transport Specific Address Subtype - RDMA 00:31:50.877 RDMA QP Service Type: 1 (Reliable Connected) 00:31:50.877 RDMA Provider Type: 1 (No provider specified) 00:31:50.877 RDMA CM Service: 1 (RDMA_CM) 00:31:50.877 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:50.877 get_feature(0x01) failed 00:31:50.877 get_feature(0x02) failed 00:31:50.877 get_feature(0x04) failed 00:31:50.877 ===================================================== 00:31:50.877 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:31:50.877 ===================================================== 00:31:50.877 Controller Capabilities/Features 00:31:50.877 ================================ 00:31:50.877 Vendor ID: 0000 00:31:50.877 Subsystem Vendor ID: 0000 00:31:50.877 Serial Number: 
bdf5895ef0e9f7a81487 00:31:50.877 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:50.877 Firmware Version: 6.8.9-20 00:31:50.877 Recommended Arb Burst: 6 00:31:50.877 IEEE OUI Identifier: 00 00 00 00:31:50.877 Multi-path I/O 00:31:50.877 May have multiple subsystem ports: Yes 00:31:50.877 May have multiple controllers: Yes 00:31:50.877 Associated with SR-IOV VF: No 00:31:50.877 Max Data Transfer Size: 1048576 00:31:50.877 Max Number of Namespaces: 1024 00:31:50.877 Max Number of I/O Queues: 128 00:31:50.877 NVMe Specification Version (VS): 1.3 00:31:50.877 NVMe Specification Version (Identify): 1.3 00:31:50.877 Maximum Queue Entries: 128 00:31:50.877 Contiguous Queues Required: No 00:31:50.877 Arbitration Mechanisms Supported 00:31:50.877 Weighted Round Robin: Not Supported 00:31:50.877 Vendor Specific: Not Supported 00:31:50.877 Reset Timeout: 7500 ms 00:31:50.877 Doorbell Stride: 4 bytes 00:31:50.877 NVM Subsystem Reset: Not Supported 00:31:50.877 Command Sets Supported 00:31:50.877 NVM Command Set: Supported 00:31:50.877 Boot Partition: Not Supported 00:31:50.877 Memory Page Size Minimum: 4096 bytes 00:31:50.877 Memory Page Size Maximum: 4096 bytes 00:31:50.877 Persistent Memory Region: Not Supported 00:31:50.877 Optional Asynchronous Events Supported 00:31:50.877 Namespace Attribute Notices: Supported 00:31:50.877 Firmware Activation Notices: Not Supported 00:31:50.877 ANA Change Notices: Supported 00:31:50.877 PLE Aggregate Log Change Notices: Not Supported 00:31:50.877 LBA Status Info Alert Notices: Not Supported 00:31:50.877 EGE Aggregate Log Change Notices: Not Supported 00:31:50.877 Normal NVM Subsystem Shutdown event: Not Supported 00:31:50.877 Zone Descriptor Change Notices: Not Supported 00:31:50.877 Discovery Log Change Notices: Not Supported 00:31:50.877 Controller Attributes 00:31:50.877 128-bit Host Identifier: Supported 00:31:50.877 Non-Operational Permissive Mode: Not Supported 00:31:50.877 NVM Sets: Not Supported 00:31:50.877 Read Recovery Levels: Not Supported 00:31:50.877 Endurance Groups: Not Supported 00:31:50.877 Predictable Latency Mode: Not Supported 00:31:50.877 Traffic Based Keep Alive: Supported 00:31:50.877 Namespace Granularity: Not Supported 00:31:50.877 SQ Associations: Not Supported 00:31:50.877 UUID List: Not Supported 00:31:50.877 Multi-Domain Subsystem: Not Supported 00:31:50.877 Fixed Capacity Management: Not Supported 00:31:50.877 Variable Capacity Management: Not Supported 00:31:50.877 Delete Endurance Group: Not Supported 00:31:50.877 Delete NVM Set: Not Supported 00:31:50.877 Extended LBA Formats Supported: Not Supported 00:31:50.877 Flexible Data Placement Supported: Not Supported 00:31:50.877 00:31:50.877 Controller Memory Buffer Support 00:31:50.877 ================================ 00:31:50.877 Supported: No 00:31:50.877 00:31:50.877 Persistent Memory Region Support 00:31:50.877 ================================ 00:31:50.877 Supported: No 00:31:50.877 00:31:50.877 Admin Command Set Attributes 00:31:50.877 ============================ 00:31:50.877 Security Send/Receive: Not Supported 00:31:50.877 Format NVM: Not Supported 00:31:50.877 Firmware Activate/Download: Not Supported 00:31:50.877 Namespace Management: Not Supported 00:31:50.877 Device Self-Test: Not Supported 00:31:50.877 Directives: Not Supported 00:31:50.877 NVMe-MI: Not Supported 00:31:50.877 Virtualization Management: Not Supported 00:31:50.877 Doorbell Buffer Config: Not Supported 00:31:50.877 Get LBA Status Capability: Not Supported 00:31:50.877 Command & Feature Lockdown 
Capability: Not Supported 00:31:50.877 Abort Command Limit: 4 00:31:50.877 Async Event Request Limit: 4 00:31:50.877 Number of Firmware Slots: N/A 00:31:50.877 Firmware Slot 1 Read-Only: N/A 00:31:50.877 Firmware Activation Without Reset: N/A 00:31:50.877 Multiple Update Detection Support: N/A 00:31:50.877 Firmware Update Granularity: No Information Provided 00:31:50.877 Per-Namespace SMART Log: Yes 00:31:50.877 Asymmetric Namespace Access Log Page: Supported 00:31:50.877 ANA Transition Time : 10 sec 00:31:50.877 00:31:50.877 Asymmetric Namespace Access Capabilities 00:31:50.877 ANA Optimized State : Supported 00:31:50.877 ANA Non-Optimized State : Supported 00:31:50.877 ANA Inaccessible State : Supported 00:31:50.877 ANA Persistent Loss State : Supported 00:31:50.877 ANA Change State : Supported 00:31:50.877 ANAGRPID is not changed : No 00:31:50.877 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:50.877 00:31:50.877 ANA Group Identifier Maximum : 128 00:31:50.877 Number of ANA Group Identifiers : 128 00:31:50.877 Max Number of Allowed Namespaces : 1024 00:31:50.877 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:50.877 Command Effects Log Page: Supported 00:31:50.877 Get Log Page Extended Data: Supported 00:31:50.877 Telemetry Log Pages: Not Supported 00:31:50.877 Persistent Event Log Pages: Not Supported 00:31:50.877 Supported Log Pages Log Page: May Support 00:31:50.877 Commands Supported & Effects Log Page: Not Supported 00:31:50.877 Feature Identifiers & Effects Log Page: May Support 00:31:50.877 NVMe-MI Commands & Effects Log Page: May Support 00:31:50.877 Data Area 4 for Telemetry Log: Not Supported 00:31:50.877 Error Log Page Entries Supported: 128 00:31:50.877 Keep Alive: Supported 00:31:50.877 Keep Alive Granularity: 1000 ms 00:31:50.877 00:31:50.877 NVM Command Set Attributes 00:31:50.877 ========================== 00:31:50.877 Submission Queue Entry Size 00:31:50.877 Max: 64 00:31:50.877 Min: 64 00:31:50.877 Completion Queue Entry Size 00:31:50.877 Max: 16 00:31:50.877 Min: 16 00:31:50.877 Number of Namespaces: 1024 00:31:50.877 Compare Command: Not Supported 00:31:50.877 Write Uncorrectable Command: Not Supported 00:31:50.877 Dataset Management Command: Supported 00:31:50.877 Write Zeroes Command: Supported 00:31:50.878 Set Features Save Field: Not Supported 00:31:50.878 Reservations: Not Supported 00:31:50.878 Timestamp: Not Supported 00:31:50.878 Copy: Not Supported 00:31:50.878 Volatile Write Cache: Present 00:31:50.878 Atomic Write Unit (Normal): 1 00:31:50.878 Atomic Write Unit (PFail): 1 00:31:50.878 Atomic Compare & Write Unit: 1 00:31:50.878 Fused Compare & Write: Not Supported 00:31:50.878 Scatter-Gather List 00:31:50.878 SGL Command Set: Supported 00:31:50.878 SGL Keyed: Supported 00:31:50.878 SGL Bit Bucket Descriptor: Not Supported 00:31:50.878 SGL Metadata Pointer: Not Supported 00:31:50.878 Oversized SGL: Not Supported 00:31:50.878 SGL Metadata Address: Not Supported 00:31:50.878 SGL Offset: Supported 00:31:50.878 Transport SGL Data Block: Not Supported 00:31:50.878 Replay Protected Memory Block: Not Supported 00:31:50.878 00:31:50.878 Firmware Slot Information 00:31:50.878 ========================= 00:31:50.878 Active slot: 0 00:31:50.878 00:31:50.878 Asymmetric Namespace Access 00:31:50.878 =========================== 00:31:50.878 Change Count : 0 00:31:50.878 Number of ANA Group Descriptors : 1 00:31:50.878 ANA Group Descriptor : 0 00:31:50.878 ANA Group ID : 1 00:31:50.878 Number of NSID Values : 1 00:31:50.878 Change Count : 0 00:31:50.878 ANA State 
: 1 00:31:50.878 Namespace Identifier : 1 00:31:50.878 00:31:50.878 Commands Supported and Effects 00:31:50.878 ============================== 00:31:50.878 Admin Commands 00:31:50.878 -------------- 00:31:50.878 Get Log Page (02h): Supported 00:31:50.878 Identify (06h): Supported 00:31:50.878 Abort (08h): Supported 00:31:50.878 Set Features (09h): Supported 00:31:50.878 Get Features (0Ah): Supported 00:31:50.878 Asynchronous Event Request (0Ch): Supported 00:31:50.878 Keep Alive (18h): Supported 00:31:50.878 I/O Commands 00:31:50.878 ------------ 00:31:50.878 Flush (00h): Supported 00:31:50.878 Write (01h): Supported LBA-Change 00:31:50.878 Read (02h): Supported 00:31:50.878 Write Zeroes (08h): Supported LBA-Change 00:31:50.878 Dataset Management (09h): Supported 00:31:50.878 00:31:50.878 Error Log 00:31:50.878 ========= 00:31:50.878 Entry: 0 00:31:50.878 Error Count: 0x3 00:31:50.878 Submission Queue Id: 0x0 00:31:50.878 Command Id: 0x5 00:31:50.878 Phase Bit: 0 00:31:50.878 Status Code: 0x2 00:31:50.878 Status Code Type: 0x0 00:31:50.878 Do Not Retry: 1 00:31:50.878 Error Location: 0x28 00:31:50.878 LBA: 0x0 00:31:50.878 Namespace: 0x0 00:31:50.878 Vendor Log Page: 0x0 00:31:50.878 ----------- 00:31:50.878 Entry: 1 00:31:50.878 Error Count: 0x2 00:31:50.878 Submission Queue Id: 0x0 00:31:50.878 Command Id: 0x5 00:31:50.878 Phase Bit: 0 00:31:50.878 Status Code: 0x2 00:31:50.878 Status Code Type: 0x0 00:31:50.878 Do Not Retry: 1 00:31:50.878 Error Location: 0x28 00:31:50.878 LBA: 0x0 00:31:50.878 Namespace: 0x0 00:31:50.878 Vendor Log Page: 0x0 00:31:50.878 ----------- 00:31:50.878 Entry: 2 00:31:50.878 Error Count: 0x1 00:31:50.878 Submission Queue Id: 0x0 00:31:50.878 Command Id: 0x0 00:31:50.878 Phase Bit: 0 00:31:50.878 Status Code: 0x2 00:31:50.878 Status Code Type: 0x0 00:31:50.878 Do Not Retry: 1 00:31:50.878 Error Location: 0x28 00:31:50.878 LBA: 0x0 00:31:50.878 Namespace: 0x0 00:31:50.878 Vendor Log Page: 0x0 00:31:50.878 00:31:50.878 Number of Queues 00:31:50.878 ================ 00:31:50.878 Number of I/O Submission Queues: 128 00:31:50.878 Number of I/O Completion Queues: 128 00:31:50.878 00:31:50.878 ZNS Specific Controller Data 00:31:50.878 ============================ 00:31:50.878 Zone Append Size Limit: 0 00:31:50.878 00:31:50.878 00:31:50.878 Active Namespaces 00:31:50.878 ================= 00:31:50.878 get_feature(0x05) failed 00:31:50.878 Namespace ID:1 00:31:50.878 Command Set Identifier: NVM (00h) 00:31:50.878 Deallocate: Supported 00:31:50.878 Deallocated/Unwritten Error: Not Supported 00:31:50.878 Deallocated Read Value: Unknown 00:31:50.878 Deallocate in Write Zeroes: Not Supported 00:31:50.878 Deallocated Guard Field: 0xFFFF 00:31:50.878 Flush: Supported 00:31:50.878 Reservation: Not Supported 00:31:50.878 Namespace Sharing Capabilities: Multiple Controllers 00:31:50.878 Size (in LBAs): 3907029168 (1863GiB) 00:31:50.878 Capacity (in LBAs): 3907029168 (1863GiB) 00:31:50.878 Utilization (in LBAs): 3907029168 (1863GiB) 00:31:50.878 UUID: 151de433-7861-410b-aba7-07b0f0a94b5f 00:31:50.878 Thin Provisioning: Not Supported 00:31:50.878 Per-NS Atomic Units: Yes 00:31:50.878 Atomic Boundary Size (Normal): 0 00:31:50.878 Atomic Boundary Size (PFail): 0 00:31:50.878 Atomic Boundary Offset: 0 00:31:50.878 NGUID/EUI64 Never Reused: No 00:31:50.878 ANA group ID: 1 00:31:50.878 Namespace Write Protected: No 00:31:50.878 Number of LBA Formats: 1 00:31:50.878 Current LBA Format: LBA Format #00 00:31:50.878 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:50.878 00:31:50.878 
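The two dumps above come from spdk_nvme_identify run twice against the kernel target: first against the discovery subsystem (nqn.2014-08.org.nvmexpress.discovery), then against nqn.2016-06.io.spdk:testnqn itself, which is why the second dump reports real namespace geometry (3907029168 LBAs) and ANA state. A minimal sketch of reproducing the same view with stock nvme-cli instead of the SPDK tool, assuming the kernel target from this run is still listening on 192.168.100.8:4420 (the /dev/nvme0 device name is illustrative; the kernel assigns it on connect):

# Walk the discovery subsystem over RDMA; this returns the same two
# records shown above (the discovery subsystem itself plus testnqn).
nvme discover -t rdma -a 192.168.100.8 -s 4420

# Connect to the NVM subsystem, read its identify data, then detach.
nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme0        # controller data: MDTS, queue sizes, ANA support
nvme id-ns /dev/nvme0 -n 1     # namespace data: size/capacity in LBAs, LBA formats
nvme disconnect -n nqn.2016-06.io.spdk:testnqn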
04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:50.878 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.878 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:31:50.878 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:50.878 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:50.878 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:31:50.878 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.878 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:50.878 rmmod nvme_rdma 00:31:50.878 rmmod nvme_fabrics 00:31:50.878 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:31:51.137 04:48:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:31:54.430 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:54.430 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:54.430 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:54.430 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:54.430 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:54.430 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:54.690 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:56.594 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:56.853 00:31:56.853 real 0m18.848s 00:31:56.853 user 0m5.129s 00:31:56.853 sys 0m11.010s 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:56.853 ************************************ 00:31:56.853 END TEST nvmf_identify_kernel_target 00:31:56.853 ************************************ 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.853 ************************************ 00:31:56.853 START TEST nvmf_auth_host 00:31:56.853 ************************************ 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:56.853 * Looking for test storage... 
00:31:56.853 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:56.853 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.118 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:57.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.119 --rc genhtml_branch_coverage=1 00:31:57.119 --rc genhtml_function_coverage=1 00:31:57.119 --rc genhtml_legend=1 00:31:57.119 --rc geninfo_all_blocks=1 00:31:57.119 --rc geninfo_unexecuted_blocks=1 00:31:57.119 00:31:57.119 ' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:57.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.119 --rc genhtml_branch_coverage=1 00:31:57.119 --rc genhtml_function_coverage=1 00:31:57.119 --rc genhtml_legend=1 00:31:57.119 --rc geninfo_all_blocks=1 00:31:57.119 --rc geninfo_unexecuted_blocks=1 00:31:57.119 00:31:57.119 ' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:57.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.119 --rc genhtml_branch_coverage=1 00:31:57.119 --rc genhtml_function_coverage=1 00:31:57.119 --rc genhtml_legend=1 00:31:57.119 --rc geninfo_all_blocks=1 00:31:57.119 --rc geninfo_unexecuted_blocks=1 00:31:57.119 00:31:57.119 ' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:57.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.119 --rc genhtml_branch_coverage=1 00:31:57.119 --rc genhtml_function_coverage=1 00:31:57.119 --rc genhtml_legend=1 00:31:57.119 --rc geninfo_all_blocks=1 00:31:57.119 --rc geninfo_unexecuted_blocks=1 00:31:57.119 00:31:57.119 ' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.119 04:48:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:57.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:57.119 04:48:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:04.140 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:04.140 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:04.140 04:48:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:04.140 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:04.141 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:04.141 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:04.141 04:48:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:04.141 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:04.141 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:04.141 altname enp217s0f0np0 00:32:04.141 altname ens818f0np0 00:32:04.141 inet 192.168.100.8/24 scope global mlx_0_0 00:32:04.141 valid_lft forever preferred_lft forever 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:04.141 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:04.141 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:04.141 altname enp217s0f1np1 00:32:04.141 altname ens818f1np1 00:32:04.141 inet 192.168.100.9/24 scope global mlx_0_1 00:32:04.141 valid_lft forever preferred_lft forever 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.141 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:32:04.402 192.168.100.9' 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:32:04.402 192.168.100.9' 00:32:04.402 04:48:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:32:04.402 192.168.100.9' 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
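The address-allocation trace above reduces to very little shell: get_ip_address takes the first IPv4 address on an RDMA-capable netdev, and nvmftestinit splits the resulting newline-separated list into first and second target IPs with head/tail. A condensed sketch of that logic, using the interface names discovered on this rig (they will differ on other machines):

# get_ip_address: `ip -o -4` prints one record per line; field 4 is "addr/prefix".
get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

RDMA_IP_LIST="$(get_ip mlx_0_0)
$(get_ip mlx_0_1)"                                                      # 192.168.100.8 and .9, one per line

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9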
00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=483080 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 483080 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 483080 ']' 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:04.402 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=06640dd28448fcb8668a4b003f0262f2 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QGv 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 06640dd28448fcb8668a4b003f0262f2 0 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 06640dd28448fcb8668a4b003f0262f2 0 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=06640dd28448fcb8668a4b003f0262f2 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QGv 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QGv 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.QGv 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=74eccf29a34b6b95b39942142f18a920043d0e298b8c6234e5af97a945aa20dd 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.E78 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 74eccf29a34b6b95b39942142f18a920043d0e298b8c6234e5af97a945aa20dd 3 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 74eccf29a34b6b95b39942142f18a920043d0e298b8c6234e5af97a945aa20dd 3 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=74eccf29a34b6b95b39942142f18a920043d0e298b8c6234e5af97a945aa20dd 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.E78 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.E78 00:32:04.662 04:48:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.E78 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:04.662 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ff7927b5ea7b2d0e56cb37471565ca269609991f1af5141a 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5f0 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ff7927b5ea7b2d0e56cb37471565ca269609991f1af5141a 0 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ff7927b5ea7b2d0e56cb37471565ca269609991f1af5141a 0 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ff7927b5ea7b2d0e56cb37471565ca269609991f1af5141a 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5f0 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5f0 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5f0 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b476081d063e34ada34650040b3674c2c3b0224fd7e33cc3 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fTP 00:32:04.923 
04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b476081d063e34ada34650040b3674c2c3b0224fd7e33cc3 2 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b476081d063e34ada34650040b3674c2c3b0224fd7e33cc3 2 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b476081d063e34ada34650040b3674c2c3b0224fd7e33cc3 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fTP 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fTP 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fTP 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=72896fc409586d576429e2ee1b5fa149 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2iy 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 72896fc409586d576429e2ee1b5fa149 1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 72896fc409586d576429e2ee1b5fa149 1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=72896fc409586d576429e2ee1b5fa149 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2iy 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2iy 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.2iy 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:04.923 04:48:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=be8b9d2d14bb1dcff94dc737923fbaaa 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.flS 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key be8b9d2d14bb1dcff94dc737923fbaaa 1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 be8b9d2d14bb1dcff94dc737923fbaaa 1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=be8b9d2d14bb1dcff94dc737923fbaaa 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.flS 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.flS 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.flS 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:04.923 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4765855f3e583e7779360a0066afb5b287c464a64a743db0 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fom 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4765855f3e583e7779360a0066afb5b287c464a64a743db0 2 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
4765855f3e583e7779360a0066afb5b287c464a64a743db0 2 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4765855f3e583e7779360a0066afb5b287c464a64a743db0 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fom 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fom 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.fom 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=859733bb848827ce1f0cb77f2f43e618 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.OOg 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 859733bb848827ce1f0cb77f2f43e618 0 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 859733bb848827ce1f0cb77f2f43e618 0 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=859733bb848827ce1f0cb77f2f43e618 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.OOg 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.OOg 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OOg 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:05.183 
04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d0a30c56d27b8f4ccd63e7878b43a957bf0836eab6340f3e7226cd365361a6c8 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.XLL 00:32:05.183 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d0a30c56d27b8f4ccd63e7878b43a957bf0836eab6340f3e7226cd365361a6c8 3 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d0a30c56d27b8f4ccd63e7878b43a957bf0836eab6340f3e7226cd365361a6c8 3 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d0a30c56d27b8f4ccd63e7878b43a957bf0836eab6340f3e7226cd365361a6c8 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.XLL 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.XLL 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.XLL 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 483080 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 483080 ']' 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
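A note on the five secrets just generated: each gen_dhchap_key call in the trace draws len/2 random bytes as a len-character hex string and wraps it in the DH-HMAC-CHAP secret representation defined by the NVMe spec, DHHC-1:<digest id>:<base64 of secret plus CRC-32>:. Below is a minimal bash reconstruction of the steps visible above, not the literal nvmf/common.sh source; in particular the body of the `python -` step is an assumption inferred from its visible inputs and from the DHHC-1 strings that appear later in the log:

gen_dhchap_key() {
    local digest=$1 len=$2 key file
    # digest ids exactly as traced: null=0 sha256=1 sha384=2 sha512=3
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    # len hex characters of randomness (e.g. 32 chars from 16 bytes)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # DHHC-1 representation: base64 over the ASCII secret plus its
    # little-endian CRC-32, with the digest id as a two-digit field
    # (assumed equivalent of the traced inline "python -" step)
    python3 -c 'import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))
print("DHHC-1:{:02}:{}:".format(int(sys.argv[2]),
      base64.b64encode(key + crc).decode()))' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"   # secrets are created mode 0600, as in the trace
    echo "$file"
}

Decoding one of the later strings supports this reading: the base64 body of DHHC-1:00:ZmY3OTI3...janiyQ==: is exactly the 48-character hex secret ff7927b5ea7b2d0e56cb37471565ca269609991f1af5141a followed by four trailing checksum bytes.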
00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:05.184 04:48:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.443 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.443 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:05.443 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:05.443 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QGv 00:32:05.443 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.443 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.E78 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.E78 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5f0 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fTP ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fTP 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.2iy 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.flS ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.flS 00:32:05.444 04:48:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.fom 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OOg ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OOg 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.XLL 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:32:05.444 04:48:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:05.444 04:48:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:32:08.737 Waiting for block devices as requested 00:32:08.737 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:08.996 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:08.996 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:08.996 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:09.255 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:09.255 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:09.255 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:09.514 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:09.514 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:09.514 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:09.514 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:09.775 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:09.775 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:09.775 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:10.034 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:10.034 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:10.034 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:10.972 No valid GPT data, bailing 00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:32:10.972 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420
00:32:11.233
00:32:11.233 Discovery Log Number of Records 2, Generation counter 2
00:32:11.233 =====Discovery Log Entry 0======
00:32:11.233 trtype: rdma
00:32:11.233 adrfam: ipv4
00:32:11.233 subtype: current discovery subsystem
00:32:11.233 treq: not specified, sq flow control disable supported
00:32:11.233 portid: 1
00:32:11.233 trsvcid: 4420
00:32:11.233 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:32:11.233 traddr: 192.168.100.8
00:32:11.233 eflags: none
00:32:11.233 rdma_prtype: not specified
00:32:11.233 rdma_qptype: connected
00:32:11.233 rdma_cms: rdma-cm
00:32:11.233 rdma_pkey: 0x0000
00:32:11.233 =====Discovery Log Entry 1======
00:32:11.233 trtype: rdma
00:32:11.233 adrfam: ipv4
00:32:11.233 subtype: nvme subsystem
00:32:11.233 treq: not specified, sq flow control disable supported
00:32:11.233 portid: 1
00:32:11.233 trsvcid: 4420
00:32:11.233 subnqn: nqn.2024-02.io.spdk:cnode0
00:32:11.234 traddr: 192.168.100.8
00:32:11.234 eflags: none
00:32:11.234 rdma_prtype: not specified
00:32:11.234 rdma_qptype: connected
00:32:11.234 rdma_cms: rdma-cm
00:32:11.234 rdma_pkey: 0x0000
00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host --
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.234 04:48:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.494 nvme0n1 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.494 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.754 nvme0n1 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
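The trace above is one complete authentication round: host/auth.sh programs the kernel target's expectations for nqn.2024-02.io.spdk:host0 via nvmet_auth_set_key, then attaches from the SPDK side with the matching keyring entries. A condensed bash sketch of such a round follows; the rpc.py path and the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions, since xtrace does not record redirect targets, and the bare `echo 0` at host/auth.sh@37 earlier is most plausibly clearing attr_allow_any_host on the subsystem so that only the linked host entry may connect:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed SPDK tree layout
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# target side: pin the hash, FFDHE group and both secrets for this host
# (attribute names are an assumption; the echoed values are from the trace)
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"
echo ffdhe2048 > "$host_dir/dhchap_dhgroup"
echo 'DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==:' > "$host_dir/dhchap_key"
echo 'DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==:' > "$host_dir/dhchap_ctrl_key"

# initiator side: the same secrets go in through the keyring, then the
# controller is attached with bidirectional DH-HMAC-CHAP
$rpc keyring_file_add_key key1 /tmp/spdk.key-null.5f0
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fTP
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
    -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

Each round then checks that bdev_nvme_get_controllers reports nvme0 and detaches it again before the next digest/dhgroup/key combination.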
00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:11.754 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.755 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.015 nvme0n1 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.015 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.275 04:48:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.275 nvme0n1 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.276 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.536 nvme0n1 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.536 04:48:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.536 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.811 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.812 nvme0n1 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.812 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.078 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 
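
The get_main_ns_ip stanza that repeats before every attach resolves which address the host should dial for the active transport: rdma maps to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP, the candidate for the transport under test is selected, and the named variable is then dereferenced, which in this run always yields 192.168.100.8. A minimal sketch of that resolution as reconstructed from the nvmf/common.sh trace above — the indirect-expansion step is an assumption inferred from the echoed value, and the real helper body may differ:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # TEST_TRANSPORT is rdma throughout this run; bail out if the transport
    # is unset or has no candidate variable mapped for it.
    [[ -z "$TEST_TRANSPORT" || -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}    # name of the variable holding the IP
    [[ -z "${!ip}" ]] && return 1           # indirect expansion: the IP itself
    echo "${!ip}"                           # 192.168.100.8 in every pass of this trace
}
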
00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.079 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.338 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:13.338 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:13.338 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:13.338 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:13.338 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.338 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.338 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.338 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:13.339 04:48:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.339 04:48:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.339 nvme0n1 00:32:13.339 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.339 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.339 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.339 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.339 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.339 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.598 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.598 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.598 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.598 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.598 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.598 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.599 04:48:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.599 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.859 nvme0n1 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
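
Each block of trace above is one pass of the auth matrix driven by host/auth.sh: nvmet_auth_set_key installs the target-side key for the digest/dhgroup/keyid under test, connect_authenticate then restricts the host's DHCHAP options, attaches a controller over RDMA, confirms it enumerates as nvme0, and detaches it. A minimal sketch of that loop, reconstructed from the trace — rpc_cmd is SPDK's wrapper around scripts/rpc.py, and the dhgroups/keys/ckeys arrays plus the helper bodies live in host/auth.sh, so details may differ:

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Target side: install key $keyid for hmac(sha256) with this dhgroup.
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
        # Host side: restrict DHCHAP negotiation to the pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # Attach over RDMA with the matching host key; the controller key is
        # optional, so the expansion below drops the flag when ckey is empty.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # Authentication succeeded iff the controller enumerates; then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

Keyid 4 in this run has no controller key, which is why its passes show ckey= set to the empty string and a bare `[[ -z '' ]]` check, and why the attach for key4 carries no --dhchap-ctrlr-key flag.
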
00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:13.859 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.860 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.120 nvme0n1 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.120 04:48:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.380 nvme0n1 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.380 04:48:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.380 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.640 nvme0n1 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.640 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:14.899 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.899 
04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.158 04:48:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.417 nvme0n1 00:32:15.417 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.417 
04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.417 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.417 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.417 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.677 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.678 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.678 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:15.678 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:15.678 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:15.678 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:15.678 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:15.678 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:15.678 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.678 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.937 nvme0n1 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:15.937 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:15.938 
04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.938 04:48:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.509 nvme0n1 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.509 04:48:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.509 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.770 nvme0n1 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.770 
04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.770 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.030 nvme0n1 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.030 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.289 04:48:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.670 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.930 nvme0n1 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
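
The `nvmet_auth_set_key` steps traced above (host/auth.sh lines 42-51) are easier to follow in source form. Below is a minimal reconstruction from the xtrace: the digest, DH group, and DHHC-1 key are written to the target's per-host DHCHAP attributes, and a controller (bidirectional) key is written only when one exists for that key slot. The configfs paths are assumptions -- the redirection targets are not captured in this trace.

```bash
# Sketch reconstructed from the xtrace; the configfs paths are assumed, not
# traced. The keys/ckeys arrays are loaded earlier in the suite.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed location of the kernel nvmet per-host DHCHAP attributes:
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$hostdir/dhchap_hash"    # e.g. hmac(sha256)
    echo "$dhgroup" > "$hostdir/dhchap_dhgroup"      # e.g. ffdhe6144
    echo "$key" > "$hostdir/dhchap_key"              # DHHC-1:0x:...: secret
    # line 51: the ctrl key is written only when this slot has one
    # (slot 4 in this run has ckey='', so the echo is skipped).
    [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
}
```
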
00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.930 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.931 04:48:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.500 nvme0n1 00:32:19.500 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.500 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.500 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.500 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
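
The host-side counterpart, `connect_authenticate` (host/auth.sh lines 55-65), is fully visible in the trace: configure the bdev_nvme module for exactly one digest/DH-group pair, attach over RDMA with the key slot under test, check that a controller named nvme0 appears, then detach. A sketch assembled from the traced rpc_cmd calls, with only the surrounding function framing inferred:

```bash
# Assembled from the traced commands; rpc_cmd is the suite's wrapper around
# SPDK's scripts/rpc.py. Only the function framing is inferred.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 ckey
    # line 58: expands to --dhchap-ctrlr-key ckeyN only when slot N has a ctrl key
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # line 64: authentication succeeded iff the controller actually came up
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
```

Each bare `nvme0n1` in the log is the namespace bdev reported by a successful attach; a failed handshake would instead surface as an attach error and a missing controller in bdev_nvme_get_controllers.
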
00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.501 04:48:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.501 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.761 nvme0n1 00:32:19.761 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.761 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.761 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.761 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.761 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.761 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.022 04:48:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.022 04:48:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.282 nvme0n1 00:32:20.282 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.282 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.282 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.282 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.282 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.282 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:20.541 04:48:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.541 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.800 nvme0n1 00:32:20.800 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.800 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.800 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.800 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.800 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.800 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 
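
`get_main_ns_ip` (nvmf/common.sh lines 769-783), which supplies the 192.168.100.8 seen throughout, selects a variable *name* per transport and then dereferences it. A sketch matching the traced branches; the transport variable name and the untraced fallback (lines 779-782, never taken in this run) are assumptions:

```bash
# Reconstructed from the traced branches. $TEST_TRANSPORT is an assumed
# variable name -- the trace only shows its expanded value, "rdma".
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # line 775: both the transport and its candidate name must be non-empty
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_FIRST_TARGET_IP
    # lines 778-783: indirect expansion; the fallback branch for an empty
    # ${!ip} never fires in this log and is omitted here.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"                          # here: 192.168.100.8
}
```
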
00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.060 04:48:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.628 nvme0n1 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.628 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.629 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.198 nvme0n1 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.198 04:49:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:22.198 04:49:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.198 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:22.457 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:22.457 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:22.457 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:22.457 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:22.457 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:22.457 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.457 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.027 nvme0n1 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.027 
04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.027 04:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.597 nvme0n1 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:23.597 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.598 04:49:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.598 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.166 nvme0n1 00:32:24.166 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.166 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.166 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.166 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.166 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.166 04:49:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:24.426 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
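
At this point the outer loop (host/auth.sh line 100) advances from sha256 to sha384 and the whole DH-group and key-slot sweep repeats. The nesting, recovered from the loop headers at lines 100-104, is sketched below; the array contents are only the values visible in this excerpt (sha256/sha384; ffdhe2048 through ffdhe8192; key slots 0-4), so the full suite may cover more.

```bash
# Loop structure per host/auth.sh lines 100-104; array contents are only the
# values observed in this excerpt and may be a subset of the real suite's.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do       # slots 0-4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```

Every iteration is a full attach/verify/detach cycle against nqn.2024-02.io.spdk:cnode0, which is why the same rpc_cmd sequence recurs through the log with only the digest, DH group, and key slot changing.
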
00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.427 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.687 nvme0n1 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.687 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.688 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.948 nvme0n1 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:24.948 04:49:03 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.948 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.208 nvme0n1 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:25.208 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.209 04:49:03 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.209 04:49:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.469 nvme0n1 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.469 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:25.729 nvme0n1 00:32:25.729 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.729 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.729 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.729 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.729 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.729 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.729 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.729 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.729 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:25.730 
04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.730 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 nvme0n1 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.990 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:26.249 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.250 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.250 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:26.250 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:26.250 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:26.250 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:26.250 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:26.250 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:26.250 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.250 04:49:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.250 nvme0n1 00:32:26.250 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.250 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.250 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.250 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.250 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.510 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.771 nvme0n1 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.771 04:49:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.771 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.031 nvme0n1 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.031 04:49:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.031 04:49:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.291 nvme0n1 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:27.291 04:49:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.291 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.292 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.292 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:27.292 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:27.292 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:27.292 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:27.292 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:27.292 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:27.292 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.292 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.861 nvme0n1 00:32:27.861 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.861 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.861 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.861 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.861 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.861 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.861 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.861 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.862 04:49:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:27.862 04:49:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.862 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.122 nvme0n1 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.122 04:49:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.382 nvme0n1 00:32:28.382 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.382 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.382 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.382 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.382 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.642 04:49:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.642 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.902 nvme0n1 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.902 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.161 nvme0n1 00:32:29.161 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.161 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.161 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.161 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.161 04:49:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.161 04:49:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.421 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.681 nvme0n1 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.681 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.941 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 nvme0n1 00:32:30.201 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.201 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.201 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.201 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.201 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 04:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.201 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.201 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.201 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.201 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:30.461 04:49:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.461 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.720 nvme0n1 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.720 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
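[editor's note] Each connect_authenticate iteration in this trace follows the same five RPCs, visible at host/auth.sh@55-65: restrict the host to the digest/dhgroup pair under test, attach with the per-keyid DH-HMAC-CHAP key (and controller key, when one exists), confirm the controller came up, then detach. A condensed reconstruction, assuming the harness helpers rpc_cmd and get_main_ns_ip from the surrounding test framework:

    # Reconstructed from the host/auth.sh@55-65 markers above; a paraphrase, not the verbatim function.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Allow only the digest/dhgroup pair under test on the initiator.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Omit --dhchap-ctrlr-key when no controller key is defined (keyid 4 in this trace).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # Authentication succeeded only if the controller actually materialized.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The "nvme0n1" tokens interleaved in the log are the namespace appearing after each successful attach; the [[ nvme0 == \n\v\m\e\0 ]] checks are the bash pattern-escaped form of the name comparison above.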
00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.979 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.239 nvme0n1 00:32:31.239 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.239 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.239 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.239 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.239 04:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.239 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.498 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.757 nvme0n1 00:32:31.757 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.757 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.758 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.017 04:49:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.586 nvme0n1 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.586 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:33.155 nvme0n1 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.155 04:49:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.093 nvme0n1 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:34.093 
04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.093 04:49:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.662 nvme0n1 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.662 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.231 nvme0n1 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.231 04:49:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:35.231 04:49:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.231 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.232 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:35.232 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:35.232 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:35.232 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:35.232 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:35.232 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.232 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.232 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.491 nvme0n1 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.491 04:49:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.491 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.751 nvme0n1 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.751 04:49:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.751 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.752 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.752 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
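[The repeated host/auth.sh@55-@65 block traced above is one full pass of the host-side helper connect_authenticate: restrict the host to a single digest/dhgroup pair, attach with DH-HMAC-CHAP key N, confirm the controller came up as nvme0, then detach. A minimal bash sketch of what the trace implies follows; rpc_cmd, get_main_ns_ip, and the keys/ckeys arrays are helpers defined elsewhere in the suite and are assumed here, while the RPC names and flags are taken verbatim from the log itself.

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	local ckey=()

	# Only pass a controller key when one exists for this keyid
	# (keyid 4 runs without a ckey in this log).
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Limit the host to the digest/dhgroup under test ...
	rpc_cmd bdev_nvme_set_options \
		--dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# ... authenticate and connect over RDMA using key $keyid ...
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# ... then verify the controller exists and clean up for the next pass.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

The "nvme0n1" tokens interleaved in the trace are the namespace appearing and disappearing as each attach/detach cycle completes.]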
00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 nvme0n1 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.011 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.270 04:49:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.270 04:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.529 nvme0n1 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.529 04:49:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.529 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:36.530 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:36.530 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:36.530 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:36.530 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:36.530 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.530 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.530 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.789 nvme0n1 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
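[The @100-@104 markers bracketing each of these blocks come from a three-level sweep: digest, then DH group, then key index. A hedged reconstruction of that driver loop follows; the exact contents of the digests and dhgroups arrays are assumptions consistent with what this run exercises (the trace shows at least sha384 and sha512 against ffdhe2048 through ffdhe8192, with keyids 0-4), and only the loop variables and callee names are copied from the xtrace output.

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			# Provision the target side, then authenticate from the host.
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
			connect_authenticate "$digest" "$dhgroup" "$keyid"
		done
	done
done

Each inner iteration corresponds to one attach/verify/detach block in the log above; the transition visible here is the sha512 pass moving from ffdhe2048 to ffdhe3072.]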
00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.789 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.048 nvme0n1 00:32:37.048 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.048 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.048 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.048 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.048 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.049 04:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.308 nvme0n1 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.308 04:49:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.308 04:49:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.308 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:37.309 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:37.309 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:37.309 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:37.309 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:37.309 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.309 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.309 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.567 nvme0n1 00:32:37.567 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.567 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.567 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.567 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.567 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.567 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.826 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.826 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.826 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.826 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.826 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.826 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 
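The host/auth.sh@42-@51 frames recurring through this trace are the target-side nvmet_auth_set_key helper: the digest, DH group, and DHHC-1 secrets it assigns get echoed into the kernel target's per-host authentication attributes. A minimal sketch reconstructed from the logged commands follows; the configfs paths and the standalone function framing are assumptions based on the Linux nvmet auth interface, not text copied from the script (keys/ckeys are the script's DHHC-1 secret arrays):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        # Hypothetical per-host configfs entry; the hostnqn matches the -q
        # argument of the attach calls logged in this trace.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"   # e.g. 'hmac(sha512)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"     # e.g. ffdhe3072
        echo "$key" > "$host/dhchap_key"             # host secret (DHHC-1:xx:...)
        # The controller key is optional; keyid 4 has no ckey, so the trace
        # logs '[[ -z '' ]]' there and this write is skipped.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
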
00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.827 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.827 04:49:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.086 nvme0n1 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:38.086 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:38.087 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.087 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.346 nvme0n1 00:32:38.346 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.346 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.346 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.346 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.346 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.346 04:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.346 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.606 nvme0n1 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.606 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.865 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.866 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.125 nvme0n1 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.125 04:49:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.125 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:39.126 04:49:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.126 04:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.385 nvme0n1 00:32:39.385 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.385 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.385 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.385 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:39.386 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:39.645 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.646 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.646 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.904 nvme0n1 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.904 
04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:39.904 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.905 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.164 nvme0n1 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:40.164 04:49:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.164 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.165 04:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.732 nvme0n1 00:32:40.732 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.732 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.732 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.732 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.732 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.732 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.732 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.733 04:49:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.733 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.301 nvme0n1 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
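Each nvme0n1 block in this trace is one pass of connect_authenticate (host/auth.sh@55-@65) against the key the target was just given. Condensed into a sketch, one pass looks like this; every RPC name, flag, address, and NQN below is taken from the frames above, while the function wrapper and comments are added for readability:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Optional controller key, exactly as expanded at host/auth.sh@58;
        # empty for keyid 4, so that attach runs with --dhchap-key alone.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Pin the initiator to the digest/dhgroup combination under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"

        # The attach only completes if DH-HMAC-CHAP authentication succeeds.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Confirm the controller exists, then detach for the next iteration.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The trace then repeats this for each (dhgroup, keyid) combination, which is why the same @55-@65 frame sequence reappears for ffdhe3072, ffdhe4096, and ffdhe6144 with keyids 0 through 4.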
00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.301 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
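The nvmf/common.sh@769-@783 frames around this point, repeated before every attach, are the get_main_ns_ip helper resolving the target address. A reconstruction from the logged checks is sketched below; the transport variable name (TEST_TRANSPORT here) and the early-return shape are assumptions, and only the candidate map and the final echo of 192.168.100.8 come straight from the trace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # With an rdma transport the candidate is NVMF_FIRST_TARGET_IP; these
        # are the '[[ -z rdma ]]' and '[[ -z NVMF_FIRST_TARGET_IP ]]' checks
        # logged around this annotation.
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # Indirect expansion of the chosen variable name; in this run it
        # prints 192.168.100.8.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }
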
00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.302 04:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.870 nvme0n1 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.870 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.129 nvme0n1 00:32:42.129 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.129 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.129 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.129 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.129 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:42.129 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:42.388 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.389 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.389 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.389 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:42.389 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.389 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:42.389 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.389 04:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.389 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.648 nvme0n1 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.648 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 
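Annotation: keyid 4 above is the unidirectional case. As the host/auth.sh@58 trace shows, the ckey array expands to a --dhchap-ctrlr-key flag only when a controller key was generated for that keyid; ckeys[4] is empty, so the array is empty and only the host authenticates. A sketch of that conditional expansion, reusing the argument values traced in this run:

    # expands to nothing when ckeys[keyid] is empty (host/auth.sh@58)
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"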
00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDY2NDBkZDI4NDQ4ZmNiODY2OGE0YjAwM2YwMjYyZjJVbTtE: 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: ]] 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzRlY2NmMjlhMzRiNmI5NWIzOTk0MjE0MmYxOGE5MjAwNDNkMGUyOThiOGM2MjM0ZTVhZjk3YTk0NWFhMjBkZEyoHQI=: 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.908 04:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.474 nvme0n1 
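Annotation: the host-side half of each connect_authenticate pass reduces to two RPCs, both visible verbatim in the trace above: restrict the initiator to one digest/DH-group pair, then attach with the keyring names of the key (and optional controller key) for that keyid. A hedged standalone sketch using SPDK's rpc.py, assuming key0/ckey0 were registered with the SPDK keyring earlier in the run:

    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0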
00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.474 04:49:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:43.474 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:43.475 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:43.475 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.475 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.043 nvme0n1 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.043 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.302 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
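Annotation: every attach in this log is preceded by the same get_main_ns_ip trace (nvmf/common.sh@769-783), which maps the active transport to the environment variable naming the target address and then dereferences it. A condensed paraphrase of the traced logic, not the verbatim helper:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}     # rdma here -> NVMF_FIRST_TARGET_IP
        [[ -n $TEST_TRANSPORT && -n $ip ]] || return 1
        [[ -n ${!ip} ]] && echo "${!ip}"         # prints 192.168.100.8 in this run
    }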
00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.303 04:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.871 nvme0n1 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2NTg1NWYzZTU4M2U3Nzc5MzYwYTAwNjZhZmI1YjI4N2M0NjRhNjRhNzQzZGIw8cdtBA==: 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: ]] 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU5NzMzYmI4NDg4MjdjZTFmMGNiNzdmMmY0M2U2MTggoRCq: 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:44.871 04:49:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.871 04:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.438 nvme0n1 00:32:45.438 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.438 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.438 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.438 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.438 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.438 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
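Annotation: each pass ends with the same verification and teardown (host/auth.sh@64-65 as traced): the controller only exists if DH-HMAC-CHAP succeeded, so its presence in bdev_nvme_get_controllers is the pass/fail signal. A sketch of that check:

    # auth succeeded iff the controller came up as nvme0
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                        # non-zero status fails the test
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean up for the next key/dhgroup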
00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBhMzBjNTZkMjdiOGY0Y2NkNjNlNzg3OGI0M2E5NTdiZjA4MzZlYWI2MzQwZjNlNzIyNmNkMzY1MzYxYTZjOA7b0xo=: 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.698 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.266 nvme0n1 00:32:46.266 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.266 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.266 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.267 04:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:46.267 
04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.267 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.527 request: 00:32:46.527 { 00:32:46.527 "name": "nvme0", 00:32:46.527 "trtype": "rdma", 00:32:46.527 "traddr": "192.168.100.8", 00:32:46.527 "adrfam": "ipv4", 00:32:46.527 "trsvcid": "4420", 00:32:46.527 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:32:46.527 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:46.527 "prchk_reftag": false, 00:32:46.527 "prchk_guard": false, 00:32:46.527 "hdgst": false, 00:32:46.527 "ddgst": false, 00:32:46.527 "allow_unrecognized_csi": false, 00:32:46.527 "method": "bdev_nvme_attach_controller", 00:32:46.527 "req_id": 1 00:32:46.527 } 00:32:46.527 Got JSON-RPC error response 00:32:46.527 response: 00:32:46.527 { 00:32:46.527 "code": -5, 00:32:46.527 "message": "Input/output error" 00:32:46.527 } 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.527 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.527 request: 00:32:46.527 { 00:32:46.527 "name": "nvme0", 00:32:46.527 "trtype": "rdma", 00:32:46.527 "traddr": "192.168.100.8", 00:32:46.527 "adrfam": "ipv4", 00:32:46.527 "trsvcid": "4420", 00:32:46.528 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:46.528 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:46.528 "prchk_reftag": false, 00:32:46.528 "prchk_guard": false, 00:32:46.528 "hdgst": false, 00:32:46.528 "ddgst": false, 00:32:46.528 "dhchap_key": "key2", 00:32:46.528 "allow_unrecognized_csi": false, 00:32:46.528 "method": "bdev_nvme_attach_controller", 00:32:46.528 "req_id": 1 00:32:46.528 } 00:32:46.528 Got JSON-RPC error response 00:32:46.528 response: 00:32:46.528 { 00:32:46.528 "code": -5, 00:32:46.528 "message": "Input/output error" 00:32:46.528 } 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.528 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.788 request: 00:32:46.788 { 00:32:46.788 "name": "nvme0", 00:32:46.788 "trtype": "rdma", 00:32:46.788 "traddr": "192.168.100.8", 00:32:46.788 "adrfam": "ipv4", 00:32:46.788 "trsvcid": "4420", 00:32:46.788 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:46.788 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:46.788 "prchk_reftag": false, 00:32:46.788 "prchk_guard": false, 00:32:46.788 "hdgst": false, 00:32:46.788 "ddgst": false, 00:32:46.788 "dhchap_key": "key1", 00:32:46.788 "dhchap_ctrlr_key": "ckey2", 00:32:46.788 "allow_unrecognized_csi": false, 00:32:46.788 "method": "bdev_nvme_attach_controller", 00:32:46.788 "req_id": 1 00:32:46.788 } 00:32:46.788 Got JSON-RPC error response 00:32:46.788 response: 00:32:46.788 { 00:32:46.788 "code": -5, 00:32:46.788 "message": "Input/output error" 00:32:46.788 } 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:46.788 04:49:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.788 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.048 nvme0n1 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.048 
04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.048 request: 00:32:47.048 { 00:32:47.048 "name": "nvme0", 00:32:47.048 "dhchap_key": "key1", 00:32:47.048 "dhchap_ctrlr_key": "ckey2", 00:32:47.048 "method": "bdev_nvme_set_keys", 00:32:47.048 "req_id": 1 00:32:47.048 } 00:32:47.048 Got JSON-RPC error response 00:32:47.048 response: 00:32:47.048 { 00:32:47.048 "code": -13, 00:32:47.048 "message": "Permission denied" 00:32:47.048 } 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:47.048 04:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:48.428 04:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.428 04:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:48.428 04:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.428 04:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.428 04:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.428 04:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:48.428 04:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY3OTI3YjVlYTdiMmQwZTU2Y2IzNzQ3MTU2NWNhMjY5NjA5OTkxZjFhZjUxNDFhjaniyQ==: 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: ]] 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjQ3NjA4MWQwNjNlMzRhZGEzNDY1MDA0MGIzNjc0YzJjM2IwMjI0ZmQ3ZTMzY2MzReoz/Q==: 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:49.366 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:49.367 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:49.367 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:49.367 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:49.367 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:49.367 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.367 04:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.367 nvme0n1 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzI4OTZmYzQwOTU4NmQ1NzY0MjllMmVlMWI1ZmExNDmupWoE: 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: ]] 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmU4YjlkMmQxNGJiMWRjZmY5NGRjNzM3OTIzZmJhYWHQ6se6: 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.367 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.626 request: 00:32:49.626 { 00:32:49.626 "name": "nvme0", 00:32:49.626 "dhchap_key": "key2", 00:32:49.626 "dhchap_ctrlr_key": "ckey1", 00:32:49.626 "method": "bdev_nvme_set_keys", 00:32:49.626 "req_id": 1 00:32:49.626 } 00:32:49.626 Got JSON-RPC error response 00:32:49.626 response: 00:32:49.626 { 00:32:49.626 "code": -13, 00:32:49.626 "message": "Permission denied" 00:32:49.626 } 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:49.626 04:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:50.562 04:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.562 04:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:50.562 04:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.562 04:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.562 04:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.562 04:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:50.562 04:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:51.939 
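The exchange above is the negative half of the DH-HMAC-CHAP re-key test: bdev_nvme_set_keys is invoked with a key pair the target was never configured for, the RPC must come back with -13 (Permission denied), and the host is then expected to drop the controller, which the harness confirms by polling bdev_nvme_get_controllers until the list is empty. A minimal standalone sketch of the same check using SPDK's rpc.py (socket path, checkout location, and key names are illustrative, not taken from this run):

  #!/usr/bin/env bash
  # Negative re-key check: a mismatched DH-HMAC-CHAP key pair must be rejected.
  rpc=./scripts/rpc.py   # assumed SPDK checkout; talks to /var/tmp/spdk.sock by default

  # rpc.py exits non-zero when the JSON-RPC response carries an error (-13 here),
  # so success from this call means the test has failed.
  if "$rpc" bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "mismatched re-key unexpectedly succeeded" >&2
      exit 1
  fi

  # The host is expected to tear the controller down; poll until none remain.
  while [ "$("$rpc" bdev_nvme_get_controllers | jq length)" -ne 0 ]; do
      sleep 1
  done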
04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:51.939 rmmod nvme_rdma 00:32:51.939 rmmod nvme_fabrics 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 483080 ']' 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 483080 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 483080 ']' 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 483080 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483080 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.939 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483080' 00:32:51.939 killing process with pid 483080 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 483080 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 483080 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:51.940 04:49:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:32:51.940 04:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:32:56.133 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:56.133 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:57.515 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:57.515 04:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.QGv /tmp/spdk.key-null.5f0 /tmp/spdk.key-sha256.2iy /tmp/spdk.key-sha384.fom /tmp/spdk.key-sha512.XLL /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:32:57.515 04:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:33:00.806 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:00.806 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:00.806 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:00.806 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:00.806 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:00.806 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:00.806 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:00.806 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:00.806 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:00.807 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:00.807 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:00.807 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:00.807 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:00.807 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:00.807 0000:80:04.1 (8086 2021): Already using the 
vfio-pci driver 00:33:00.807 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:00.807 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:01.066 00:33:01.066 real 1m4.182s 00:33:01.066 user 0m57.387s 00:33:01.066 sys 0m16.458s 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.066 ************************************ 00:33:01.066 END TEST nvmf_auth_host 00:33:01.066 ************************************ 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.066 ************************************ 00:33:01.066 START TEST nvmf_bdevperf 00:33:01.066 ************************************ 00:33:01.066 04:49:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:33:01.326 * Looking for test storage... 
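The END TEST/START TEST banners and the real/user/sys timing above come from run_test, the autotest wrapper nvmf_host.sh uses to dispatch each suite; the nvmf_bdevperf suite that begins here was handed off as run_test nvmf_bdevperf .../test/nvmf/host/bdevperf.sh --transport=rdma. To replay just this suite outside the Jenkins harness, the same script can be invoked directly (a sketch; paths follow this workspace's layout):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  # --transport selects the suite's TEST_TRANSPORT; rdma matches this run.
  sudo ./test/nvmf/host/bdevperf.sh --transport=rdma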
00:33:01.326 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:01.326 04:49:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:01.326 04:49:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:33:01.326 04:49:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:01.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.326 --rc genhtml_branch_coverage=1 00:33:01.326 --rc genhtml_function_coverage=1 00:33:01.326 --rc genhtml_legend=1 00:33:01.326 --rc geninfo_all_blocks=1 00:33:01.326 --rc geninfo_unexecuted_blocks=1 00:33:01.326 00:33:01.326 ' 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:01.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.326 --rc genhtml_branch_coverage=1 00:33:01.326 --rc genhtml_function_coverage=1 00:33:01.326 --rc genhtml_legend=1 00:33:01.326 --rc geninfo_all_blocks=1 00:33:01.326 --rc geninfo_unexecuted_blocks=1 00:33:01.326 00:33:01.326 ' 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:01.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.326 --rc genhtml_branch_coverage=1 00:33:01.326 --rc genhtml_function_coverage=1 00:33:01.326 --rc genhtml_legend=1 00:33:01.326 --rc geninfo_all_blocks=1 00:33:01.326 --rc geninfo_unexecuted_blocks=1 00:33:01.326 00:33:01.326 ' 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:01.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.326 --rc genhtml_branch_coverage=1 00:33:01.326 --rc genhtml_function_coverage=1 00:33:01.326 --rc genhtml_legend=1 00:33:01.326 --rc geninfo_all_blocks=1 00:33:01.326 --rc geninfo_unexecuted_blocks=1 00:33:01.326 00:33:01.326 ' 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.326 04:49:40 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.326 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:01.327 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:01.327 04:49:40 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.327 04:49:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:09.453 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.454 04:49:46 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:09.454 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:09.454 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:09.454 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:09.454 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:09.454 04:49:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:09.454 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:09.454 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:09.454 altname enp217s0f0np0 00:33:09.454 altname ens818f0np0 00:33:09.454 inet 192.168.100.8/24 scope global mlx_0_0 00:33:09.454 valid_lft forever preferred_lft forever 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:09.454 04:49:47 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:09.454 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:09.455 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:09.455 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:09.455 altname enp217s0f1np1 00:33:09.455 altname ens818f1np1 00:33:09.455 inet 192.168.100.9/24 scope global mlx_0_1 00:33:09.455 valid_lft forever preferred_lft forever 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:33:09.455 04:49:47 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:09.455 192.168.100.9' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:09.455 192.168.100.9' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:09.455 192.168.100.9' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=498364 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 498364 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 498364 ']' 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:09.455 [2024-11-17 04:49:47.253862] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:09.455 [2024-11-17 04:49:47.253912] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.455 [2024-11-17 04:49:47.347891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:09.455 [2024-11-17 04:49:47.370434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.455 [2024-11-17 04:49:47.370474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.455 [2024-11-17 04:49:47.370483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.455 [2024-11-17 04:49:47.370491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.455 [2024-11-17 04:49:47.370498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
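nvmfappstart above reduces to launching nvmf_tgt with core mask 0xE and blocking until its RPC socket answers; the EAL parameter dump and the reactor start-up notices on cores 1-3 that follow are the visible result, and waitforlisten is what prints the 'Waiting for process...' line. A condensed sketch of the start-and-wait pattern (binary path per this workspace; the rpc_get_methods probe is an assumption standing in for the harness's waitforlisten):

  # Start the target on cores 1-3 (mask 0xE) with all trace groups enabled.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!   # keep the pid around for later cleanup

  # Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds.
  until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
      sleep 0.5
  done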
00:33:09.455 [2024-11-17 04:49:47.371951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:09.455 [2024-11-17 04:49:47.372030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:09.455 [2024-11-17 04:49:47.372032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:09.455 [2024-11-17 04:49:47.532876] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x180e3d0/0x1812880) succeed.
00:33:09.455 [2024-11-17 04:49:47.541868] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x180f970/0x1853f20) succeed.
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:09.455 Malloc0
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:09.455 [2024-11-17 04:49:47.690285] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:33:09.455 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:33:09.456 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:33:09.456 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:09.456 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:09.456 {
00:33:09.456 "params": {
00:33:09.456 "name": "Nvme$subsystem",
00:33:09.456 "trtype": "$TEST_TRANSPORT",
00:33:09.456 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:09.456 "adrfam": "ipv4",
00:33:09.456 "trsvcid": "$NVMF_PORT",
00:33:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:09.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:09.456 "hdgst": ${hdgst:-false},
00:33:09.456 "ddgst": ${ddgst:-false}
00:33:09.456 },
00:33:09.456 "method": "bdev_nvme_attach_controller"
00:33:09.456 }
00:33:09.456 EOF
00:33:09.456 )")
00:33:09.456 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:33:09.456 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:33:09.456 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:33:09.456 04:49:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:09.456 "params": {
00:33:09.456 "name": "Nvme1",
00:33:09.456 "trtype": "rdma",
00:33:09.456 "traddr": "192.168.100.8",
00:33:09.456 "adrfam": "ipv4",
00:33:09.456 "trsvcid": "4420",
00:33:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:09.456 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:09.456 "hdgst": false,
00:33:09.456 "ddgst": false
00:33:09.456 },
00:33:09.456 "method": "bdev_nvme_attach_controller"
00:33:09.456 }'
00:33:09.456 [2024-11-17 04:49:47.742356] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:33:09.456 [2024-11-17 04:49:47.742403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498394 ]
00:33:09.456 [2024-11-17 04:49:47.834226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:09.456 [2024-11-17 04:49:47.856610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:09.456 Running I/O for 1 seconds...
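A note on the plumbing traced above: the generated JSON never touches disk. gen_nvmf_target_json emits the bdev_nvme_attach_controller stanza shown, and the harness hands it to bdevperf as /dev/fd/62. A sketch of the equivalent call (the <() process substitution is an assumption inferred from the /dev/fd path; the binary path and flags are verbatim from the trace):

  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: write/read-back
  # verification workload; -t 1: one-second smoke run before the 15 s
  # failover run that follows.
  "$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w verify -t 1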
00:33:10.395 18307.00 IOPS, 71.51 MiB/s
00:33:10.395                                                                    Latency(us)
00:33:10.395 [2024-11-17T03:49:49.225Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:10.395 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:10.395 Verification LBA range: start 0x0 length 0x4000
00:33:10.395 Nvme1n1             :       1.01   18352.06      71.69       0.00       0.00    6934.50    1926.76   10852.76
00:33:10.395 [2024-11-17T03:49:49.225Z] ===================================================================================================================
00:33:10.395 [2024-11-17T03:49:49.225Z] Total               :            18352.06      71.69       0.00       0.00    6934.50    1926.76   10852.76
00:33:10.395 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=498664
00:33:10.395 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:33:10.395 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:33:10.395 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:33:10.395 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:33:10.395 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:33:10.395 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:10.395 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:10.395 {
00:33:10.395 "params": {
00:33:10.395 "name": "Nvme$subsystem",
00:33:10.395 "trtype": "$TEST_TRANSPORT",
00:33:10.395 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:10.395 "adrfam": "ipv4",
00:33:10.395 "trsvcid": "$NVMF_PORT",
00:33:10.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:10.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:10.395 "hdgst": ${hdgst:-false},
00:33:10.395 "ddgst": ${ddgst:-false}
00:33:10.395 },
00:33:10.395 "method": "bdev_nvme_attach_controller"
00:33:10.395 }
00:33:10.395 EOF
00:33:10.395 )")
00:33:10.395 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:33:10.654 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:33:10.654 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:33:10.654 04:49:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:10.654 "params": {
00:33:10.654 "name": "Nvme1",
00:33:10.654 "trtype": "rdma",
00:33:10.654 "traddr": "192.168.100.8",
00:33:10.654 "adrfam": "ipv4",
00:33:10.654 "trsvcid": "4420",
00:33:10.654 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:10.654 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:10.654 "hdgst": false,
00:33:10.654 "ddgst": false
00:33:10.654 },
00:33:10.654 "method": "bdev_nvme_attach_controller"
00:33:10.654 }'
00:33:10.654 [2024-11-17 04:49:49.257610] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:33:10.654 [2024-11-17 04:49:49.257666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498664 ]
00:33:10.654 [2024-11-17 04:49:49.350104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:10.654 [2024-11-17 04:49:49.369870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:10.913 Running I/O for 15 seconds...
00:33:12.788 18216.00 IOPS, 71.16 MiB/s [2024-11-17T03:49:52.556Z]
18340.00 IOPS, 71.64 MiB/s [2024-11-17T03:49:52.556Z]
04:49:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 498364
00:33:13.726 04:49:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:33:14.555 16341.33 IOPS, 63.83 MiB/s [2024-11-17T03:49:53.385Z]
[... ~250 near-identical nvme_qpair.c log lines (04:49:53.240531 through 04:49:53.243041) elided: each of the 127 commands still outstanding when the target died - WRITEs to lba 130096-131064 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs to lba 130048-130080 (len:8, SGL KEYED DATA BLOCK len:0x1000 key:0x182200) on sqid:1 - is printed by 243:nvme_io_qpair_print_command and then completed by 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cdw0:4ead4000 sqhd:cf72 p:0 m:0 dnr:0 ...]
00:33:14.559 [2024-11-17 04:49:53.244845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:14.559 [2024-11-17 04:49:53.244863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:14.559 [2024-11-17 04:49:53.244872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130088 len:8 PRP1 0x0 PRP2 0x0
00:33:14.559 [2024-11-17 04:49:53.244882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:14.559 [2024-11-17 04:49:53.247699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:14.559 [2024-11-17 04:49:53.261487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:33:14.559 [2024-11-17 04:49:53.264175] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:33:14.559 [2024-11-17 04:49:53.264201] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:33:14.559 [2024-11-17 04:49:53.264209] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040
00:33:15.756 12256.00 IOPS, 47.88 MiB/s [2024-11-17T03:49:54.586Z]
00:33:15.756 [2024-11-17 04:49:54.268190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:33:15.756 [2024-11-17 04:49:54.268250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.756 [2024-11-17 04:49:54.268837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.756 [2024-11-17 04:49:54.268872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.756 [2024-11-17 04:49:54.268902] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:33:15.756 [2024-11-17 04:49:54.268952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
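Stripped of the xtrace noise, the choreography host/bdevperf.sh is executing around these errors reduces to the skeleton below (a sketch reconstructed from the @30/@32/@33/@35/@36/@38 trace lines; variable names are illustrative):

  bdevperfpid=498664       # @30: second bdevperf run (-t 15 -f) in the background
  sleep 3                  # @32: let I/O reach steady state
  kill -9 "$nvmfpid"       # @33: hard-kill the nvmf target (pid 498364) mid-run
  sleep 3                  # @35: bdevperf hits CQ transport errors and starts
                           #      its reset/reconnect loop against a dead target
  tgt_init                 # @36: bring up a fresh target and re-create the
                           #      transport, Malloc0, subsystem, and listener
  wait "$bdevperfpid"      # @38: the retry loop reconnects and the 15 s run completes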
00:33:15.756 [2024-11-17 04:49:54.274458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.756 [2024-11-17 04:49:54.277168] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:15.756 [2024-11-17 04:49:54.277188] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:15.756 [2024-11-17 04:49:54.277195] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:33:16.692 9804.80 IOPS, 38.30 MiB/s [2024-11-17T03:49:55.522Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 498364 Killed "${NVMF_APP[@]}" "$@" 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=499724 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 499724 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 499724 ']' 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.692 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.693 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.693 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.693 [2024-11-17 04:49:55.281176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:16.693 [2024-11-17 04:49:55.281199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.693 [2024-11-17 04:49:55.281366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.693 [2024-11-17 04:49:55.281376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.693 [2024-11-17 04:49:55.281385] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:33:16.693 [2024-11-17 04:49:55.281397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.693 [2024-11-17 04:49:55.282874] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:16.693 [2024-11-17 04:49:55.282919] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.693 [2024-11-17 04:49:55.286740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.693 [2024-11-17 04:49:55.289330] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:16.693 [2024-11-17 04:49:55.289351] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:16.693 [2024-11-17 04:49:55.289359] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:33:16.693 [2024-11-17 04:49:55.377178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:16.693 [2024-11-17 04:49:55.398399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.693 [2024-11-17 04:49:55.398433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.693 [2024-11-17 04:49:55.398442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.693 [2024-11-17 04:49:55.398450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.693 [2024-11-17 04:49:55.398457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.693 [2024-11-17 04:49:55.399955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.693 [2024-11-17 04:49:55.400068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.693 [2024-11-17 04:49:55.400069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.693 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.693 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:33:16.693 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:16.693 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:16.693 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.952 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.952 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:16.952 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.952 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.952 8170.67 IOPS, 31.92 MiB/s [2024-11-17T03:49:55.782Z] [2024-11-17 04:49:55.573913] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13e43d0/0x13e8880) succeed. 00:33:16.952 [2024-11-17 04:49:55.582978] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13e5970/0x1429f20) succeed. 
00:33:16.952 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.952 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:16.952 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.953 Malloc0 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.953 [2024-11-17 04:49:55.729109] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.953 04:49:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 498664 00:33:17.522 [2024-11-17 04:49:56.293392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:17.522 [2024-11-17 04:49:56.293420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.522 [2024-11-17 04:49:56.293592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.522 [2024-11-17 04:49:56.293602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.522 [2024-11-17 04:49:56.293612] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:33:17.522 [2024-11-17 04:49:56.293625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.522 [2024-11-17 04:49:56.298034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.522 [2024-11-17 04:49:56.339068] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
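The bring-up traced above reduces to five RPCs against the running nvmf_tgt: create the RDMA transport, create a malloc bdev, create the subsystem, attach the namespace, and add the listener. A minimal sketch of the same sequence, assuming SPDK's stock scripts/rpc.py client and the exact names shown in the trace (the harness itself issues these through its rpc_cmd wrapper rather than calling rpc.py directly):

  # RDMA transport with the buffer sizing used in the log
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Subsystem that allows any host (-a), with the serial from the trace
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Expose Malloc0 as a namespace and listen on the RDMA address
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420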
00:33:19.161 7586.00 IOPS, 29.63 MiB/s [2024-11-17T03:49:58.929Z] 8925.62 IOPS, 34.87 MiB/s [2024-11-17T03:49:59.866Z] 9967.67 IOPS, 38.94 MiB/s [2024-11-17T03:50:00.804Z] 10801.30 IOPS, 42.19 MiB/s [2024-11-17T03:50:01.742Z] 11483.91 IOPS, 44.86 MiB/s [2024-11-17T03:50:02.680Z] 12052.00 IOPS, 47.08 MiB/s [2024-11-17T03:50:03.618Z] 12533.15 IOPS, 48.96 MiB/s [2024-11-17T03:50:04.999Z] 12945.36 IOPS, 50.57 MiB/s [2024-11-17T03:50:04.999Z] 13302.40 IOPS, 51.96 MiB/s 00:33:26.169 Latency(us) 00:33:26.169 [2024-11-17T03:50:04.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:26.169 Verification LBA range: start 0x0 length 0x4000 00:33:26.169 Nvme1n1 : 15.01 13304.46 51.97 10675.95 0.00 5318.33 332.60 1026765.62 00:33:26.169 [2024-11-17T03:50:04.999Z] =================================================================================================================== 00:33:26.169 [2024-11-17T03:50:04.999Z] Total : 13304.46 51.97 10675.95 0.00 5318.33 332.60 1026765.62 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:26.169 rmmod nvme_rdma 00:33:26.169 rmmod nvme_fabrics 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 499724 ']' 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 499724 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 499724 ']' 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 499724 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.169 04:50:04 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 499724 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 499724' 00:33:26.169 killing process with pid 499724 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 499724 00:33:26.169 04:50:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 499724 00:33:26.428 04:50:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:26.428 04:50:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:26.428 00:33:26.428 real 0m25.317s 00:33:26.428 user 1m2.345s 00:33:26.428 sys 0m6.670s 00:33:26.428 04:50:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:26.428 04:50:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:26.428 ************************************ 00:33:26.428 END TEST nvmf_bdevperf 00:33:26.429 ************************************ 00:33:26.429 04:50:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:33:26.429 04:50:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:26.429 04:50:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.429 04:50:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.429 ************************************ 00:33:26.429 START TEST nvmf_target_disconnect 00:33:26.429 ************************************ 00:33:26.429 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:33:26.688 * Looking for test storage... 
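The killprocess trace above follows a fixed pattern: verify the pid is alive with kill -0, read the process's comm name, refuse to kill if it is sudo, then kill and reap. A minimal sketch reconstructed from the trace; the error handling shown is an assumption, and the real helper lives in autotest_common.sh:

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                  # liveness check; fails if the pid is gone
      local process_name
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [[ $process_name == sudo ]] && return 1         # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                             # reap; tolerate a nonzero exit
  }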
00:33:26.688 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:33:26.688 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:26.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.689 --rc genhtml_branch_coverage=1 00:33:26.689 --rc genhtml_function_coverage=1 00:33:26.689 --rc genhtml_legend=1 00:33:26.689 --rc geninfo_all_blocks=1 00:33:26.689 --rc geninfo_unexecuted_blocks=1 00:33:26.689 00:33:26.689 ' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:26.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.689 --rc genhtml_branch_coverage=1 00:33:26.689 --rc genhtml_function_coverage=1 00:33:26.689 --rc genhtml_legend=1 00:33:26.689 --rc geninfo_all_blocks=1 00:33:26.689 --rc geninfo_unexecuted_blocks=1 00:33:26.689 00:33:26.689 ' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:26.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.689 --rc genhtml_branch_coverage=1 00:33:26.689 --rc genhtml_function_coverage=1 00:33:26.689 --rc genhtml_legend=1 00:33:26.689 --rc geninfo_all_blocks=1 00:33:26.689 --rc geninfo_unexecuted_blocks=1 00:33:26.689 00:33:26.689 ' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:26.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.689 --rc genhtml_branch_coverage=1 00:33:26.689 --rc genhtml_function_coverage=1 00:33:26.689 --rc genhtml_legend=1 00:33:26.689 --rc geninfo_all_blocks=1 00:33:26.689 --rc geninfo_unexecuted_blocks=1 00:33:26.689 00:33:26.689 ' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:26.689 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.689 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.690 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:26.690 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:26.690 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:33:26.690 04:50:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:34.818 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:34.818 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:34.818 04:50:12 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:34.818 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:34.818 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:34.818 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:34.819 04:50:12 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:34.819 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:34.819 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:34.819 altname enp217s0f0np0 00:33:34.819 altname ens818f0np0 00:33:34.819 inet 192.168.100.8/24 scope global mlx_0_0 00:33:34.819 valid_lft forever preferred_lft forever 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:34.819 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:34.819 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:34.819 altname enp217s0f1np1 00:33:34.819 altname ens818f1np1 00:33:34.819 inet 192.168.100.9/24 scope global mlx_0_1 00:33:34.819 valid_lft forever preferred_lft forever 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
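Every get_ip_address call traced above shares one pipeline: with ip -o each address prints on a single line, field 4 carries the CIDR form (e.g. 192.168.100.8/24), and cut strips the prefix length. A minimal standalone sketch of the same extraction, assuming an interface name like mlx_0_0 from this rig:

  get_ip_address() {
      local interface=$1
      # -o gives one line per address; awk field 4 is e.g. 192.168.100.8/24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this setup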
00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:34.819 192.168.100.9' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:34.819 192.168.100.9' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:34.819 192.168.100.9' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:34.819 ************************************ 00:33:34.819 START TEST nvmf_target_disconnect_tc1 00:33:34.819 ************************************ 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:33:34.819 04:50:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:34.819 [2024-11-17 04:50:12.878522] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:34.819 [2024-11-17 04:50:12.878610] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:34.819 [2024-11-17 04:50:12.878635] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:33:35.078 [2024-11-17 04:50:13.882670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:33:35.078 [2024-11-17 04:50:13.882728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
00:33:35.078 [2024-11-17 04:50:13.882740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:33:35.078 [2024-11-17 04:50:13.882766] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:35.078 [2024-11-17 04:50:13.882775] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:33:35.078 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:33:35.078 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:35.078 Initializing NVMe Controllers 00:33:35.078 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:33:35.078 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:35.078 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:35.078 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:35.078 00:33:35.078 real 0m1.158s 00:33:35.078 user 0m0.852s 00:33:35.078 sys 0m0.296s 00:33:35.078 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:35.078 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:35.078 ************************************ 00:33:35.078 END TEST nvmf_target_disconnect_tc1 00:33:35.078 ************************************ 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:35.337 ************************************ 00:33:35.337 START TEST nvmf_target_disconnect_tc2 00:33:35.337 ************************************ 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=504828 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 504828 00:33:35.337 04:50:13 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 504828 ']' 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:35.337 04:50:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.337 [2024-11-17 04:50:14.044313] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:35.337 [2024-11-17 04:50:14.044367] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:35.337 [2024-11-17 04:50:14.140152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:35.337 [2024-11-17 04:50:14.162247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:35.337 [2024-11-17 04:50:14.162287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:35.337 [2024-11-17 04:50:14.162300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:35.337 [2024-11-17 04:50:14.162310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:35.337 [2024-11-17 04:50:14.162317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:35.337 [2024-11-17 04:50:14.164200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:35.337 [2024-11-17 04:50:14.164309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:35.337 [2024-11-17 04:50:14.164417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:35.337 [2024-11-17 04:50:14.164419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:35.596 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:35.596 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:33:35.596 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:35.596 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:35.596 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.597 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:35.597 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:35.597 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.597 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.597 Malloc0 00:33:35.597 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.597 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:35.597 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.597 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.597 [2024-11-17 04:50:14.379236] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6df640/0x6eb2d0) succeed. 00:33:35.597 [2024-11-17 04:50:14.388833] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6e0c80/0x72c970) succeed. 
00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.856 [2024-11-17 04:50:14.532106] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.856 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:35.857 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.857 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.857 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.857 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=505066 00:33:35.857 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:35.857 04:50:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:37.763 04:50:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
504828 00:33:37.763 04:50:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Write completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 Read completed with error (sct=0, sc=8) 00:33:39.143 starting I/O failed 00:33:39.143 [2024-11-17 04:50:17.752542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.080 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 504828 Killed "${NVMF_APP[@]}" "$@" 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart 
-m 0xF0 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=505617 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 505617 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 505617 ']' 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:40.080 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:40.080 [2024-11-17 04:50:18.613825] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:40.080 [2024-11-17 04:50:18.613880] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.080 [2024-11-17 04:50:18.704435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:40.080 [2024-11-17 04:50:18.726639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.080 [2024-11-17 04:50:18.726678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.080 [2024-11-17 04:50:18.726688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.080 [2024-11-17 04:50:18.726696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.080 [2024-11-17 04:50:18.726718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
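To recap the fault injection that produced the burst of failed completions above: target_disconnect.sh started the reconnect initiator against the subsystem, gave it two seconds to get I/O in flight, then hard-killed the target, which is why every queued read and write completed with a CQ transport error (-6) and the shell reported PID 504828 as Killed; the fresh nvmf_tgt starting here (PID 505617) is the recovery phase. A condensed sketch of that sequence, with the PIDs, flags and paths taken from this run:

    # paths and PIDs below are from this CI run; adjust SPDK_DIR for a local repro
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    nvmfpid=504828        # PID of the nvmf_tgt launched earlier in this run
    "$SPDK_DIR"/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!       # 505066 in this run
    sleep 2               # let the initiator queue I/O (queue depth 32)
    kill -9 "$nvmfpid"    # no graceful NVMe-oF shutdown; connections simply drop
    sleep 2               # pending I/O now completes with CQ transport error -6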
00:33:40.080 [2024-11-17 04:50:18.728354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:40.080 [2024-11-17 04:50:18.728465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:40.080 [2024-11-17 04:50:18.728571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:40.080 [2024-11-17 04:50:18.728573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:40.080 Read completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Read completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Read completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Read completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Read completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Read completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Read completed with error (sct=0, sc=8) 00:33:40.080 starting I/O failed 00:33:40.080 Write completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Write completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Write completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Write completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Write completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Read completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Read completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Read completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Write completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Read completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Write completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Write completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Write completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 Read completed with error (sct=0, sc=8) 00:33:40.081 starting I/O failed 00:33:40.081 [2024-11-17 04:50:18.757643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:40.081 [2024-11-17 04:50:18.759513] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:40.081 [2024-11-17 04:50:18.759536] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:40.081 [2024-11-17 04:50:18.759545] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:40.081 Malloc0 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.081 04:50:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:40.340 [2024-11-17 04:50:18.934672] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfa7640/0xfb32d0) succeed. 00:33:40.340 [2024-11-17 04:50:18.944335] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfa8c80/0xff4970) succeed. 
00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:40.340 [2024-11-17 04:50:19.088105] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.340 04:50:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 505066 00:33:41.276 [2024-11-17 04:50:19.763786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.276 qpair failed and we were unable to recover it. 
00:33:41.276 [2024-11-17 04:50:19.774193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.276 [2024-11-17 04:50:19.774253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.774276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.774292] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.774302] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.784257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-11-17 04:50:19.794018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.794064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.794082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.794091] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.794100] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.804413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-11-17 04:50:19.813967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.814008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.814025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.814034] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.814043] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.824228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 
00:33:41.277 [2024-11-17 04:50:19.834049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.834093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.834110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.834119] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.834127] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.844304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-11-17 04:50:19.854257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.854303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.854320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.854329] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.854338] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.864593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-11-17 04:50:19.874186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.874226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.874243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.874252] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.874261] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.884409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 
00:33:41.277 [2024-11-17 04:50:19.894214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.894257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.894274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.894283] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.894292] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.904517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-11-17 04:50:19.914281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.914322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.914339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.914348] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.914356] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.924505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-11-17 04:50:19.934430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.934471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.934488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.934497] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.934505] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.944386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 
00:33:41.277 [2024-11-17 04:50:19.954423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.954466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.954483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.954496] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.954504] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.964628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-11-17 04:50:19.974443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.974486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.974503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.974512] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.974521] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:19.984496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-11-17 04:50:19.994533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:19.994576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:19.994593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:19.994602] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:19.994610] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:20.004735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 
00:33:41.277 [2024-11-17 04:50:20.014654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.277 [2024-11-17 04:50:20.014699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.277 [2024-11-17 04:50:20.014716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.277 [2024-11-17 04:50:20.014725] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.277 [2024-11-17 04:50:20.014734] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.277 [2024-11-17 04:50:20.024706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.278 [2024-11-17 04:50:20.034816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.278 [2024-11-17 04:50:20.034860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.278 [2024-11-17 04:50:20.034877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.278 [2024-11-17 04:50:20.034887] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.278 [2024-11-17 04:50:20.034896] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.278 [2024-11-17 04:50:20.044953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.278 qpair failed and we were unable to recover it. 00:33:41.278 [2024-11-17 04:50:20.054769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.278 [2024-11-17 04:50:20.054815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.278 [2024-11-17 04:50:20.054832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.278 [2024-11-17 04:50:20.054842] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.278 [2024-11-17 04:50:20.054851] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.278 [2024-11-17 04:50:20.064750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.278 qpair failed and we were unable to recover it. 
00:33:41.278 [2024-11-17 04:50:20.074744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.278 [2024-11-17 04:50:20.074787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.278 [2024-11-17 04:50:20.074803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.278 [2024-11-17 04:50:20.074813] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.278 [2024-11-17 04:50:20.074821] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.278 [2024-11-17 04:50:20.085068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.278 qpair failed and we were unable to recover it. 00:33:41.278 [2024-11-17 04:50:20.094811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.278 [2024-11-17 04:50:20.094854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.278 [2024-11-17 04:50:20.094872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.278 [2024-11-17 04:50:20.094881] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.278 [2024-11-17 04:50:20.094889] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.278 [2024-11-17 04:50:20.105075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.278 qpair failed and we were unable to recover it. 00:33:41.538 [2024-11-17 04:50:20.114959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.115003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.115021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.115030] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.115039] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.125093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 
00:33:41.538 [2024-11-17 04:50:20.134982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.135025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.135042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.135051] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.135060] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.145240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 00:33:41.538 [2024-11-17 04:50:20.155099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.155139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.155155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.155164] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.155173] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.165221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 00:33:41.538 [2024-11-17 04:50:20.175115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.175155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.175172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.175181] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.175190] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.185349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 
00:33:41.538 [2024-11-17 04:50:20.195260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.195300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.195317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.195326] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.195335] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.205612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 00:33:41.538 [2024-11-17 04:50:20.215183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.215219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.215239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.215248] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.215256] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.225318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 00:33:41.538 [2024-11-17 04:50:20.235359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.235399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.235417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.235425] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.235434] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.245707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 
00:33:41.538 [2024-11-17 04:50:20.255385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.255429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.255446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.255454] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.255463] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.265460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 00:33:41.538 [2024-11-17 04:50:20.275482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.275521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.275538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.275547] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.275556] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.285631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 00:33:41.538 [2024-11-17 04:50:20.295435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.295479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.295496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.295508] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.295516] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.305673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 
00:33:41.538 [2024-11-17 04:50:20.315394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.315436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.315454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.315463] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.315471] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.325627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 00:33:41.538 [2024-11-17 04:50:20.335680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.335720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.335737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.335746] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.538 [2024-11-17 04:50:20.335755] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.538 [2024-11-17 04:50:20.345798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.538 qpair failed and we were unable to recover it. 00:33:41.538 [2024-11-17 04:50:20.355742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.538 [2024-11-17 04:50:20.355780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.538 [2024-11-17 04:50:20.355797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.538 [2024-11-17 04:50:20.355806] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.539 [2024-11-17 04:50:20.355814] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.539 [2024-11-17 04:50:20.365926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.539 qpair failed and we were unable to recover it. 
00:33:41.799 [2024-11-17 04:50:20.375713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.799 [2024-11-17 04:50:20.375751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.799 [2024-11-17 04:50:20.375768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.799 [2024-11-17 04:50:20.375778] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.799 [2024-11-17 04:50:20.375787] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.799 [2024-11-17 04:50:20.386086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.799 qpair failed and we were unable to recover it. 00:33:41.799 [2024-11-17 04:50:20.395719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.799 [2024-11-17 04:50:20.395760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.799 [2024-11-17 04:50:20.395777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.799 [2024-11-17 04:50:20.395786] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.799 [2024-11-17 04:50:20.395795] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.799 [2024-11-17 04:50:20.406038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.799 qpair failed and we were unable to recover it. 00:33:41.799 [2024-11-17 04:50:20.415803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.799 [2024-11-17 04:50:20.415846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.799 [2024-11-17 04:50:20.415864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.799 [2024-11-17 04:50:20.415873] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.799 [2024-11-17 04:50:20.415882] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:41.799 [2024-11-17 04:50:20.426150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.799 qpair failed and we were unable to recover it. 
00:33:41.799 [2024-11-17 04:50:20.435868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.799 [2024-11-17 04:50:20.435913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.799 [2024-11-17 04:50:20.435930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.799 [2024-11-17 04:50:20.435945] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.799 [2024-11-17 04:50:20.435954] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.799 [2024-11-17 04:50:20.446152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.799 qpair failed and we were unable to recover it.
00:33:41.799 [2024-11-17 04:50:20.455939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.799 [2024-11-17 04:50:20.455975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.799 [2024-11-17 04:50:20.455992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.799 [2024-11-17 04:50:20.456002] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.799 [2024-11-17 04:50:20.456010] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.799 [2024-11-17 04:50:20.465998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.799 qpair failed and we were unable to recover it.
00:33:41.799 [2024-11-17 04:50:20.476000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.799 [2024-11-17 04:50:20.476043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.799 [2024-11-17 04:50:20.476061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.799 [2024-11-17 04:50:20.476070] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.799 [2024-11-17 04:50:20.476079] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.799 [2024-11-17 04:50:20.486225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.799 qpair failed and we were unable to recover it.
00:33:41.799 [2024-11-17 04:50:20.496099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.799 [2024-11-17 04:50:20.496147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.799 [2024-11-17 04:50:20.496164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.799 [2024-11-17 04:50:20.496173] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.799 [2024-11-17 04:50:20.496182] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.799 [2024-11-17 04:50:20.506110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.799 qpair failed and we were unable to recover it.
00:33:41.799 [2024-11-17 04:50:20.516298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.799 [2024-11-17 04:50:20.516336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.799 [2024-11-17 04:50:20.516353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.799 [2024-11-17 04:50:20.516362] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.799 [2024-11-17 04:50:20.516370] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.799 [2024-11-17 04:50:20.526284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.799 qpair failed and we were unable to recover it.
00:33:41.799 [2024-11-17 04:50:20.536134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.799 [2024-11-17 04:50:20.536182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.799 [2024-11-17 04:50:20.536200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.799 [2024-11-17 04:50:20.536209] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.799 [2024-11-17 04:50:20.536217] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.799 [2024-11-17 04:50:20.546369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.799 qpair failed and we were unable to recover it.
00:33:41.799 [2024-11-17 04:50:20.556242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.799 [2024-11-17 04:50:20.556284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.799 [2024-11-17 04:50:20.556304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.799 [2024-11-17 04:50:20.556313] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.799 [2024-11-17 04:50:20.556321] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.800 [2024-11-17 04:50:20.566529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.800 qpair failed and we were unable to recover it.
00:33:41.800 [2024-11-17 04:50:20.576289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.800 [2024-11-17 04:50:20.576332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.800 [2024-11-17 04:50:20.576349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.800 [2024-11-17 04:50:20.576358] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.800 [2024-11-17 04:50:20.576367] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.800 [2024-11-17 04:50:20.586420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.800 qpair failed and we were unable to recover it.
00:33:41.800 [2024-11-17 04:50:20.596320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.800 [2024-11-17 04:50:20.596365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.800 [2024-11-17 04:50:20.596383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.800 [2024-11-17 04:50:20.596392] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.800 [2024-11-17 04:50:20.596400] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.800 [2024-11-17 04:50:20.606651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.800 qpair failed and we were unable to recover it.
00:33:41.800 [2024-11-17 04:50:20.616265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.800 [2024-11-17 04:50:20.616310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.800 [2024-11-17 04:50:20.616328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.800 [2024-11-17 04:50:20.616337] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.800 [2024-11-17 04:50:20.616346] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:41.800 [2024-11-17 04:50:20.626654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:41.800 qpair failed and we were unable to recover it.
00:33:42.059 [2024-11-17 04:50:20.636391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.636433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.636450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.636463] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.636472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.646566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.656364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.656407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.656424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.656434] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.656443] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.666615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.676442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.676483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.676501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.676510] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.676519] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.686677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.696492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.696532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.696549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.696559] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.696568] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.706730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.716621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.716663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.716680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.716689] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.716698] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.726818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.736596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.736637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.736655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.736664] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.736673] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.746776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.756779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.756825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.756842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.756852] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.756861] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.766970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.776706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.776748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.776766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.776776] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.776784] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.787099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.796761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.796802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.796819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.796828] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.796837] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.807059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.816873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.816912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.816929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.816944] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.816952] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.827106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.836985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.837030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.837047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.060 [2024-11-17 04:50:20.837057] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.060 [2024-11-17 04:50:20.837066] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.060 [2024-11-17 04:50:20.847123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.060 qpair failed and we were unable to recover it.
00:33:42.060 [2024-11-17 04:50:20.857053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.060 [2024-11-17 04:50:20.857091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.060 [2024-11-17 04:50:20.857108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.061 [2024-11-17 04:50:20.857117] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.061 [2024-11-17 04:50:20.857126] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.061 [2024-11-17 04:50:20.867337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.061 qpair failed and we were unable to recover it.
00:33:42.061 [2024-11-17 04:50:20.877118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.061 [2024-11-17 04:50:20.877161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.061 [2024-11-17 04:50:20.877178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.061 [2024-11-17 04:50:20.877188] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.061 [2024-11-17 04:50:20.877197] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.061 [2024-11-17 04:50:20.887259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.061 qpair failed and we were unable to recover it.
00:33:42.321 [2024-11-17 04:50:20.897197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.321 [2024-11-17 04:50:20.897241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.321 [2024-11-17 04:50:20.897261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.321 [2024-11-17 04:50:20.897271] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.321 [2024-11-17 04:50:20.897280] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.321 [2024-11-17 04:50:20.907431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-11-17 04:50:20.917259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.321 [2024-11-17 04:50:20.917299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.321 [2024-11-17 04:50:20.917316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.321 [2024-11-17 04:50:20.917325] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.321 [2024-11-17 04:50:20.917334] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.321 [2024-11-17 04:50:20.927454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-11-17 04:50:20.937114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.321 [2024-11-17 04:50:20.937159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.321 [2024-11-17 04:50:20.937176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.321 [2024-11-17 04:50:20.937185] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.321 [2024-11-17 04:50:20.937194] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.321 [2024-11-17 04:50:20.947305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-11-17 04:50:20.957312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.321 [2024-11-17 04:50:20.957355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.321 [2024-11-17 04:50:20.957372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.321 [2024-11-17 04:50:20.957382] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.321 [2024-11-17 04:50:20.957391] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.321 [2024-11-17 04:50:20.967497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-11-17 04:50:20.977419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.321 [2024-11-17 04:50:20.977464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.321 [2024-11-17 04:50:20.977481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.321 [2024-11-17 04:50:20.977490] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.321 [2024-11-17 04:50:20.977506] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.321 [2024-11-17 04:50:20.987696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-11-17 04:50:20.997436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.321 [2024-11-17 04:50:20.997475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.321 [2024-11-17 04:50:20.997492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.321 [2024-11-17 04:50:20.997502] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.321 [2024-11-17 04:50:20.997510] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.321 [2024-11-17 04:50:21.007742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-11-17 04:50:21.017493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.321 [2024-11-17 04:50:21.017531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.321 [2024-11-17 04:50:21.017548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.321 [2024-11-17 04:50:21.017558] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.321 [2024-11-17 04:50:21.017566] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.321 [2024-11-17 04:50:21.027621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-11-17 04:50:21.037568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.321 [2024-11-17 04:50:21.037611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.321 [2024-11-17 04:50:21.037628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.321 [2024-11-17 04:50:21.037637] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.321 [2024-11-17 04:50:21.037646] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.321 [2024-11-17 04:50:21.047918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-11-17 04:50:21.057605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.321 [2024-11-17 04:50:21.057649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.321 [2024-11-17 04:50:21.057666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.321 [2024-11-17 04:50:21.057675] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.321 [2024-11-17 04:50:21.057684] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.321 [2024-11-17 04:50:21.067648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.322 [2024-11-17 04:50:21.077639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.322 [2024-11-17 04:50:21.077680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.322 [2024-11-17 04:50:21.077698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.322 [2024-11-17 04:50:21.077707] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.322 [2024-11-17 04:50:21.077715] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.322 [2024-11-17 04:50:21.087782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.322 qpair failed and we were unable to recover it.
00:33:42.322 [2024-11-17 04:50:21.097685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.322 [2024-11-17 04:50:21.097724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.322 [2024-11-17 04:50:21.097740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.322 [2024-11-17 04:50:21.097750] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.322 [2024-11-17 04:50:21.097758] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.322 [2024-11-17 04:50:21.108053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.322 qpair failed and we were unable to recover it.
00:33:42.322 [2024-11-17 04:50:21.117811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.322 [2024-11-17 04:50:21.117852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.322 [2024-11-17 04:50:21.117870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.322 [2024-11-17 04:50:21.117879] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.322 [2024-11-17 04:50:21.117888] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.322 [2024-11-17 04:50:21.127991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.322 qpair failed and we were unable to recover it.
00:33:42.322 [2024-11-17 04:50:21.137936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.322 [2024-11-17 04:50:21.137980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.322 [2024-11-17 04:50:21.137997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.322 [2024-11-17 04:50:21.138006] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.322 [2024-11-17 04:50:21.138015] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.322 [2024-11-17 04:50:21.148084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.322 qpair failed and we were unable to recover it.
00:33:42.582 [2024-11-17 04:50:21.157893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.582 [2024-11-17 04:50:21.157950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.582 [2024-11-17 04:50:21.157968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.582 [2024-11-17 04:50:21.157977] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.582 [2024-11-17 04:50:21.157986] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.582 [2024-11-17 04:50:21.168242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.582 qpair failed and we were unable to recover it.
00:33:42.582 [2024-11-17 04:50:21.177916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.582 [2024-11-17 04:50:21.177962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.582 [2024-11-17 04:50:21.177979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.582 [2024-11-17 04:50:21.177988] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.582 [2024-11-17 04:50:21.177997] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.582 [2024-11-17 04:50:21.188121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.582 qpair failed and we were unable to recover it.
00:33:42.582 [2024-11-17 04:50:21.198069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.582 [2024-11-17 04:50:21.198111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.582 [2024-11-17 04:50:21.198128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.582 [2024-11-17 04:50:21.198138] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.582 [2024-11-17 04:50:21.198146] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.582 [2024-11-17 04:50:21.208226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.582 qpair failed and we were unable to recover it.
00:33:42.582 [2024-11-17 04:50:21.218089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.582 [2024-11-17 04:50:21.218133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.582 [2024-11-17 04:50:21.218150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.582 [2024-11-17 04:50:21.218159] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.582 [2024-11-17 04:50:21.218168] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.582 [2024-11-17 04:50:21.228248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.582 qpair failed and we were unable to recover it.
00:33:42.582 [2024-11-17 04:50:21.238140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.582 [2024-11-17 04:50:21.238179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.582 [2024-11-17 04:50:21.238199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.582 [2024-11-17 04:50:21.238209] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.583 [2024-11-17 04:50:21.238217] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.583 [2024-11-17 04:50:21.248358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.583 qpair failed and we were unable to recover it.
00:33:42.583 [2024-11-17 04:50:21.258201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.583 [2024-11-17 04:50:21.258242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.583 [2024-11-17 04:50:21.258260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.583 [2024-11-17 04:50:21.258269] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.583 [2024-11-17 04:50:21.258277] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.583 [2024-11-17 04:50:21.268294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.583 qpair failed and we were unable to recover it.
00:33:42.583 [2024-11-17 04:50:21.278299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.583 [2024-11-17 04:50:21.278338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.583 [2024-11-17 04:50:21.278356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.583 [2024-11-17 04:50:21.278365] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.583 [2024-11-17 04:50:21.278374] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.583 [2024-11-17 04:50:21.288526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.583 qpair failed and we were unable to recover it.
00:33:42.583 [2024-11-17 04:50:21.298299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.583 [2024-11-17 04:50:21.298340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.583 [2024-11-17 04:50:21.298357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.583 [2024-11-17 04:50:21.298367] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.583 [2024-11-17 04:50:21.298375] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.583 [2024-11-17 04:50:21.308582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.583 qpair failed and we were unable to recover it.
00:33:42.583 [2024-11-17 04:50:21.318433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.583 [2024-11-17 04:50:21.318480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.583 [2024-11-17 04:50:21.318497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.583 [2024-11-17 04:50:21.318506] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.583 [2024-11-17 04:50:21.318518] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.583 [2024-11-17 04:50:21.328598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.583 qpair failed and we were unable to recover it.
00:33:42.583 [2024-11-17 04:50:21.338440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.583 [2024-11-17 04:50:21.338481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.583 [2024-11-17 04:50:21.338499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.583 [2024-11-17 04:50:21.338508] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.583 [2024-11-17 04:50:21.338516] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.583 [2024-11-17 04:50:21.348713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.583 qpair failed and we were unable to recover it.
00:33:42.583 [2024-11-17 04:50:21.358474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.583 [2024-11-17 04:50:21.358515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.583 [2024-11-17 04:50:21.358533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.583 [2024-11-17 04:50:21.358542] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.583 [2024-11-17 04:50:21.358550] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.583 [2024-11-17 04:50:21.368514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.583 qpair failed and we were unable to recover it.
00:33:42.583 [2024-11-17 04:50:21.378565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.583 [2024-11-17 04:50:21.378614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.583 [2024-11-17 04:50:21.378631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.583 [2024-11-17 04:50:21.378641] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.583 [2024-11-17 04:50:21.378649] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.583 [2024-11-17 04:50:21.388912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.583 qpair failed and we were unable to recover it.
00:33:42.583 [2024-11-17 04:50:21.398673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.583 [2024-11-17 04:50:21.398712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.583 [2024-11-17 04:50:21.398730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.583 [2024-11-17 04:50:21.398740] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.583 [2024-11-17 04:50:21.398748] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.583 [2024-11-17 04:50:21.409032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.583 qpair failed and we were unable to recover it.
00:33:42.843 [2024-11-17 04:50:21.418720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.843 [2024-11-17 04:50:21.418763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.843 [2024-11-17 04:50:21.418780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.843 [2024-11-17 04:50:21.418790] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.843 [2024-11-17 04:50:21.418799] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.843 [2024-11-17 04:50:21.428809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.843 qpair failed and we were unable to recover it.
00:33:42.843 [2024-11-17 04:50:21.438680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.843 [2024-11-17 04:50:21.438722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.843 [2024-11-17 04:50:21.438738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.843 [2024-11-17 04:50:21.438748] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.843 [2024-11-17 04:50:21.438756] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.843 [2024-11-17 04:50:21.448992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.843 qpair failed and we were unable to recover it.
00:33:42.843 [2024-11-17 04:50:21.458818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.843 [2024-11-17 04:50:21.458866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.843 [2024-11-17 04:50:21.458883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.843 [2024-11-17 04:50:21.458892] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.843 [2024-11-17 04:50:21.458901] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.843 [2024-11-17 04:50:21.469244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.843 qpair failed and we were unable to recover it.
00:33:42.843 [2024-11-17 04:50:21.478922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.843 [2024-11-17 04:50:21.478969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.843 [2024-11-17 04:50:21.478987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.843 [2024-11-17 04:50:21.478996] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.843 [2024-11-17 04:50:21.479005] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.843 [2024-11-17 04:50:21.489177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.843 qpair failed and we were unable to recover it.
00:33:42.843 [2024-11-17 04:50:21.498943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.843 [2024-11-17 04:50:21.498986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.843 [2024-11-17 04:50:21.499004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.843 [2024-11-17 04:50:21.499013] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.843 [2024-11-17 04:50:21.499021] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.843 [2024-11-17 04:50:21.509342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.843 qpair failed and we were unable to recover it.
00:33:42.843 [2024-11-17 04:50:21.518962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.843 [2024-11-17 04:50:21.519004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.843 [2024-11-17 04:50:21.519021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.843 [2024-11-17 04:50:21.519030] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.843 [2024-11-17 04:50:21.519039] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.843 [2024-11-17 04:50:21.529344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.843 qpair failed and we were unable to recover it.
00:33:42.843 [2024-11-17 04:50:21.539085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.843 [2024-11-17 04:50:21.539124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.843 [2024-11-17 04:50:21.539141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.843 [2024-11-17 04:50:21.539151] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.844 [2024-11-17 04:50:21.539159] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.844 [2024-11-17 04:50:21.549478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.844 qpair failed and we were unable to recover it.
00:33:42.844 [2024-11-17 04:50:21.559202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.844 [2024-11-17 04:50:21.559247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.844 [2024-11-17 04:50:21.559264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.844 [2024-11-17 04:50:21.559273] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.844 [2024-11-17 04:50:21.559281] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:42.844 [2024-11-17 04:50:21.569545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.844 qpair failed and we were unable to recover it.
00:33:42.844 [2024-11-17 04:50:21.579244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.844 [2024-11-17 04:50:21.579282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.844 [2024-11-17 04:50:21.579299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.844 [2024-11-17 04:50:21.579311] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.844 [2024-11-17 04:50:21.579319] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:42.844 [2024-11-17 04:50:21.589551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.844 qpair failed and we were unable to recover it. 00:33:42.844 [2024-11-17 04:50:21.599275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.844 [2024-11-17 04:50:21.599317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.844 [2024-11-17 04:50:21.599334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.844 [2024-11-17 04:50:21.599343] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.844 [2024-11-17 04:50:21.599352] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:42.844 [2024-11-17 04:50:21.609552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.844 qpair failed and we were unable to recover it. 00:33:42.844 [2024-11-17 04:50:21.619295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.844 [2024-11-17 04:50:21.619342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.844 [2024-11-17 04:50:21.619359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.844 [2024-11-17 04:50:21.619368] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.844 [2024-11-17 04:50:21.619376] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:42.844 [2024-11-17 04:50:21.629584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.844 qpair failed and we were unable to recover it. 
00:33:42.844 [2024-11-17 04:50:21.639410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.844 [2024-11-17 04:50:21.639451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.844 [2024-11-17 04:50:21.639468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.844 [2024-11-17 04:50:21.639477] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.844 [2024-11-17 04:50:21.639486] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:42.844 [2024-11-17 04:50:21.649781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.844 qpair failed and we were unable to recover it. 00:33:42.844 [2024-11-17 04:50:21.659337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.844 [2024-11-17 04:50:21.659381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.844 [2024-11-17 04:50:21.659398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.844 [2024-11-17 04:50:21.659407] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.844 [2024-11-17 04:50:21.659419] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:42.844 [2024-11-17 04:50:21.669825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.844 qpair failed and we were unable to recover it. 00:33:43.104 [2024-11-17 04:50:21.679554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.104 [2024-11-17 04:50:21.679597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.104 [2024-11-17 04:50:21.679614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.104 [2024-11-17 04:50:21.679624] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.104 [2024-11-17 04:50:21.679632] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.104 [2024-11-17 04:50:21.689796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.104 qpair failed and we were unable to recover it. 
00:33:43.104 [2024-11-17 04:50:21.699601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.104 [2024-11-17 04:50:21.699641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.104 [2024-11-17 04:50:21.699657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.104 [2024-11-17 04:50:21.699667] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.104 [2024-11-17 04:50:21.699675] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.104 [2024-11-17 04:50:21.709962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.104 qpair failed and we were unable to recover it. 00:33:43.104 [2024-11-17 04:50:21.719575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.104 [2024-11-17 04:50:21.719617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.104 [2024-11-17 04:50:21.719634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.104 [2024-11-17 04:50:21.719643] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.104 [2024-11-17 04:50:21.719652] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.104 [2024-11-17 04:50:21.730006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.104 qpair failed and we were unable to recover it. 00:33:43.104 [2024-11-17 04:50:21.739649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.104 [2024-11-17 04:50:21.739687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.104 [2024-11-17 04:50:21.739704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.739713] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.739722] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.749862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 
00:33:43.105 [2024-11-17 04:50:21.759674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.105 [2024-11-17 04:50:21.759716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.105 [2024-11-17 04:50:21.759733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.759742] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.759751] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.770005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 00:33:43.105 [2024-11-17 04:50:21.779669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.105 [2024-11-17 04:50:21.779709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.105 [2024-11-17 04:50:21.779726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.779736] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.779744] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.789910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 00:33:43.105 [2024-11-17 04:50:21.799817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.105 [2024-11-17 04:50:21.799858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.105 [2024-11-17 04:50:21.799878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.799888] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.799898] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.810154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 
00:33:43.105 [2024-11-17 04:50:21.819852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.105 [2024-11-17 04:50:21.819894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.105 [2024-11-17 04:50:21.819913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.819922] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.819939] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.830154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 00:33:43.105 [2024-11-17 04:50:21.839958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.105 [2024-11-17 04:50:21.839999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.105 [2024-11-17 04:50:21.840019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.840028] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.840037] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.850150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 00:33:43.105 [2024-11-17 04:50:21.859990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.105 [2024-11-17 04:50:21.860037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.105 [2024-11-17 04:50:21.860054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.860064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.860073] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.870152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 
00:33:43.105 [2024-11-17 04:50:21.880107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.105 [2024-11-17 04:50:21.880149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.105 [2024-11-17 04:50:21.880166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.880176] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.880185] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.890331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 00:33:43.105 [2024-11-17 04:50:21.900353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.105 [2024-11-17 04:50:21.900394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.105 [2024-11-17 04:50:21.900412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.900421] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.900430] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.910348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 00:33:43.105 [2024-11-17 04:50:21.920178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.105 [2024-11-17 04:50:21.920219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.105 [2024-11-17 04:50:21.920236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.105 [2024-11-17 04:50:21.920249] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.105 [2024-11-17 04:50:21.920258] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.105 [2024-11-17 04:50:21.930430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.105 qpair failed and we were unable to recover it. 
00:33:43.366 [2024-11-17 04:50:21.940325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:21.940374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:21.940391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:21.940400] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:21.940409] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:21.950607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 00:33:43.366 [2024-11-17 04:50:21.960347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:21.960388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:21.960405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:21.960415] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:21.960423] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:21.970629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 00:33:43.366 [2024-11-17 04:50:21.980432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:21.980469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:21.980485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:21.980495] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:21.980504] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:21.990722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 
00:33:43.366 [2024-11-17 04:50:22.000567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:22.000607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:22.000624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:22.000634] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:22.000643] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:22.010778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 00:33:43.366 [2024-11-17 04:50:22.020564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:22.020606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:22.020623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:22.020633] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:22.020641] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:22.030767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 00:33:43.366 [2024-11-17 04:50:22.040598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:22.040639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:22.040655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:22.040665] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:22.040673] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:22.051028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 
00:33:43.366 [2024-11-17 04:50:22.060638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:22.060684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:22.060701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:22.060711] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:22.060720] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:22.070935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 00:33:43.366 [2024-11-17 04:50:22.080712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:22.080758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:22.080775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:22.080784] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:22.080793] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:22.090946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 00:33:43.366 [2024-11-17 04:50:22.100786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:22.100829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:22.100846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:22.100856] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:22.100865] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:22.110864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 
00:33:43.366 [2024-11-17 04:50:22.120799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:22.120841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:22.120859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:22.120868] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:22.120876] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:22.130908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 00:33:43.366 [2024-11-17 04:50:22.140843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.366 [2024-11-17 04:50:22.140882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.366 [2024-11-17 04:50:22.140899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.366 [2024-11-17 04:50:22.140909] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.366 [2024-11-17 04:50:22.140917] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.366 [2024-11-17 04:50:22.150995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.366 qpair failed and we were unable to recover it. 00:33:43.366 [2024-11-17 04:50:22.160937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.367 [2024-11-17 04:50:22.160980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.367 [2024-11-17 04:50:22.160997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.367 [2024-11-17 04:50:22.161006] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.367 [2024-11-17 04:50:22.161015] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.367 [2024-11-17 04:50:22.171011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.367 qpair failed and we were unable to recover it. 
00:33:43.367 [2024-11-17 04:50:22.180880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.367 [2024-11-17 04:50:22.180928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.367 [2024-11-17 04:50:22.180953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.367 [2024-11-17 04:50:22.180963] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.367 [2024-11-17 04:50:22.180971] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.367 [2024-11-17 04:50:22.191191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.367 qpair failed and we were unable to recover it. 00:33:43.627 [2024-11-17 04:50:22.201018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.201065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.201082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.201091] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.201101] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.211287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 00:33:43.627 [2024-11-17 04:50:22.221077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.221122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.221139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.221148] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.221157] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.231324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 
00:33:43.627 [2024-11-17 04:50:22.241196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.241238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.241255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.241264] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.241273] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.251477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 00:33:43.627 [2024-11-17 04:50:22.261284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.261332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.261349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.261362] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.261371] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.271573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 00:33:43.627 [2024-11-17 04:50:22.281304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.281346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.281364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.281373] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.281382] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.291461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 
00:33:43.627 [2024-11-17 04:50:22.301355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.301395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.301413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.301423] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.301431] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.311541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 00:33:43.627 [2024-11-17 04:50:22.321352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.321395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.321412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.321421] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.321430] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.331639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 00:33:43.627 [2024-11-17 04:50:22.341439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.341479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.341497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.341506] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.341515] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.351794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 
00:33:43.627 [2024-11-17 04:50:22.361481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.361522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.361539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.361548] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.361557] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.371774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 00:33:43.627 [2024-11-17 04:50:22.381556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.381598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.381615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.381624] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.381633] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.391860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 00:33:43.627 [2024-11-17 04:50:22.401523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.401565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.401582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.401591] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.401600] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.627 [2024-11-17 04:50:22.411649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.627 qpair failed and we were unable to recover it. 
00:33:43.627 [2024-11-17 04:50:22.421544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.627 [2024-11-17 04:50:22.421590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.627 [2024-11-17 04:50:22.421607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.627 [2024-11-17 04:50:22.421617] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.627 [2024-11-17 04:50:22.421626] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.628 [2024-11-17 04:50:22.431939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.628 qpair failed and we were unable to recover it. 00:33:43.628 [2024-11-17 04:50:22.441700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.628 [2024-11-17 04:50:22.441744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.628 [2024-11-17 04:50:22.441761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.628 [2024-11-17 04:50:22.441771] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.628 [2024-11-17 04:50:22.441779] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.628 [2024-11-17 04:50:22.451867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.628 qpair failed and we were unable to recover it. 00:33:43.888 [2024-11-17 04:50:22.461773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.888 [2024-11-17 04:50:22.461812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.888 [2024-11-17 04:50:22.461829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.888 [2024-11-17 04:50:22.461838] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.888 [2024-11-17 04:50:22.461848] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.888 [2024-11-17 04:50:22.471962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.888 qpair failed and we were unable to recover it. 
00:33:43.888 [2024-11-17 04:50:22.481646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.888 [2024-11-17 04:50:22.481691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.888 [2024-11-17 04:50:22.481708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.888 [2024-11-17 04:50:22.481717] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.888 [2024-11-17 04:50:22.481726] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.888 [2024-11-17 04:50:22.491866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.888 qpair failed and we were unable to recover it. 00:33:43.888 [2024-11-17 04:50:22.501763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.888 [2024-11-17 04:50:22.501805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.888 [2024-11-17 04:50:22.501822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.888 [2024-11-17 04:50:22.501831] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.888 [2024-11-17 04:50:22.501840] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.888 [2024-11-17 04:50:22.512008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.888 qpair failed and we were unable to recover it. 00:33:43.888 [2024-11-17 04:50:22.521840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.888 [2024-11-17 04:50:22.521877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.888 [2024-11-17 04:50:22.521897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.888 [2024-11-17 04:50:22.521907] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.888 [2024-11-17 04:50:22.521915] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.888 [2024-11-17 04:50:22.532240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.888 qpair failed and we were unable to recover it. 
00:33:43.888 [2024-11-17 04:50:22.541957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.888 [2024-11-17 04:50:22.541995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.888 [2024-11-17 04:50:22.542012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.888 [2024-11-17 04:50:22.542022] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.888 [2024-11-17 04:50:22.542030] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.888 [2024-11-17 04:50:22.552084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.888 qpair failed and we were unable to recover it. 00:33:43.888 [2024-11-17 04:50:22.561983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.888 [2024-11-17 04:50:22.562025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.888 [2024-11-17 04:50:22.562042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.888 [2024-11-17 04:50:22.562052] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.889 [2024-11-17 04:50:22.562060] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.889 [2024-11-17 04:50:22.572216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.889 qpair failed and we were unable to recover it. 00:33:43.889 [2024-11-17 04:50:22.582089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.889 [2024-11-17 04:50:22.582133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.889 [2024-11-17 04:50:22.582150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.889 [2024-11-17 04:50:22.582160] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.889 [2024-11-17 04:50:22.582168] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:43.889 [2024-11-17 04:50:22.592250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.889 qpair failed and we were unable to recover it. 
00:33:43.889 [2024-11-17 04:50:22.602049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.889 [2024-11-17 04:50:22.602094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.889 [2024-11-17 04:50:22.602111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.889 [2024-11-17 04:50:22.602120] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.889 [2024-11-17 04:50:22.602132] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:33:43.889 [2024-11-17 04:50:22.612300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:43.889 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT-failure cycle repeats 68 more times, roughly 20 ms apart, from 04:50:22.622 through 04:50:23.976 (elapsed 00:33:43.889 to 00:33:45.193), identical except for timestamps; every attempt targets the same qpair id 2 / rqpair=0x2000003cf800 and ends with "qpair failed and we were unable to recover it." ...]
00:33:45.193 [2024-11-17 04:50:23.986340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.193 [2024-11-17 04:50:23.986382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.193 [2024-11-17 04:50:23.986399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.193 [2024-11-17 04:50:23.986409] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.193 [2024-11-17 04:50:23.986417] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.193 [2024-11-17 04:50:23.996684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.193 qpair failed and we were unable to recover it. 00:33:45.193 [2024-11-17 04:50:24.006345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.193 [2024-11-17 04:50:24.006388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.193 [2024-11-17 04:50:24.006405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.193 [2024-11-17 04:50:24.006414] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.193 [2024-11-17 04:50:24.006423] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.193 [2024-11-17 04:50:24.016687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.193 qpair failed and we were unable to recover it. 00:33:45.454 [2024-11-17 04:50:24.026316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.026363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.026381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.026390] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.026399] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.036703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 
00:33:45.454 [2024-11-17 04:50:24.046400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.046441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.046458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.046468] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.046476] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.056718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 00:33:45.454 [2024-11-17 04:50:24.066466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.066508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.066525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.066535] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.066544] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.076874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 00:33:45.454 [2024-11-17 04:50:24.086549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.086593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.086611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.086621] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.086630] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.096856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 
00:33:45.454 [2024-11-17 04:50:24.106605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.106650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.106667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.106677] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.106686] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.116894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 00:33:45.454 [2024-11-17 04:50:24.126595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.126634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.126651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.126661] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.126670] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.136941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 00:33:45.454 [2024-11-17 04:50:24.146622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.146669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.146691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.146700] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.146709] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.156841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 
00:33:45.454 [2024-11-17 04:50:24.166724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.166767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.166785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.166794] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.166803] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.177048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 00:33:45.454 [2024-11-17 04:50:24.186741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.186785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.186801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.186811] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.186819] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.196989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 00:33:45.454 [2024-11-17 04:50:24.206780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.206825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.206842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.206852] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.206861] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.217175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 
00:33:45.454 [2024-11-17 04:50:24.226904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.226950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.454 [2024-11-17 04:50:24.226967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.454 [2024-11-17 04:50:24.226977] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.454 [2024-11-17 04:50:24.226989] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.454 [2024-11-17 04:50:24.237205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.454 qpair failed and we were unable to recover it. 00:33:45.454 [2024-11-17 04:50:24.247106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.454 [2024-11-17 04:50:24.247150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.455 [2024-11-17 04:50:24.247167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.455 [2024-11-17 04:50:24.247176] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.455 [2024-11-17 04:50:24.247186] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.455 [2024-11-17 04:50:24.257269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.455 qpair failed and we were unable to recover it. 00:33:45.455 [2024-11-17 04:50:24.267063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.455 [2024-11-17 04:50:24.267108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.455 [2024-11-17 04:50:24.267125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.455 [2024-11-17 04:50:24.267135] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.455 [2024-11-17 04:50:24.267143] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.455 [2024-11-17 04:50:24.277309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.455 qpair failed and we were unable to recover it. 
00:33:45.715 [2024-11-17 04:50:24.287046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.287086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.287103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.287113] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.287122] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.297565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 00:33:45.715 [2024-11-17 04:50:24.307134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.307173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.307190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.307199] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.307208] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.317607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 00:33:45.715 [2024-11-17 04:50:24.327275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.327316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.327333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.327343] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.327351] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.337579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 
00:33:45.715 [2024-11-17 04:50:24.347365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.347409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.347426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.347436] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.347444] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.357615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 00:33:45.715 [2024-11-17 04:50:24.367350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.367390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.367407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.367416] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.367425] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.377833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 00:33:45.715 [2024-11-17 04:50:24.387463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.387502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.387518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.387527] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.387536] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.397614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 
00:33:45.715 [2024-11-17 04:50:24.407426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.407471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.407488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.407497] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.407506] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.417770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 00:33:45.715 [2024-11-17 04:50:24.427486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.427530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.427546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.427556] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.427565] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.437721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 00:33:45.715 [2024-11-17 04:50:24.447572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.447618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.447635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.447645] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.447653] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.457827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 
00:33:45.715 [2024-11-17 04:50:24.467617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.467656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.467673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.467682] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.467690] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.477991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 00:33:45.715 [2024-11-17 04:50:24.487691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.487734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.487753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.487762] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.487771] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.498099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 00:33:45.715 [2024-11-17 04:50:24.507690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.715 [2024-11-17 04:50:24.507733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.715 [2024-11-17 04:50:24.507749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.715 [2024-11-17 04:50:24.507759] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.715 [2024-11-17 04:50:24.507767] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.715 [2024-11-17 04:50:24.518079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.715 qpair failed and we were unable to recover it. 
00:33:45.715 [2024-11-17 04:50:24.527714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.716 [2024-11-17 04:50:24.527752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.716 [2024-11-17 04:50:24.527769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.716 [2024-11-17 04:50:24.527778] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.716 [2024-11-17 04:50:24.527787] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.716 [2024-11-17 04:50:24.538075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.716 qpair failed and we were unable to recover it. 00:33:45.976 [2024-11-17 04:50:24.547807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.547843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.547861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.547870] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.547879] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.558179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 00:33:45.976 [2024-11-17 04:50:24.567873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.567911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.567928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.567942] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.567954] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.578173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 
00:33:45.976 [2024-11-17 04:50:24.587949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.587995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.588012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.588021] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.588030] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.598360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 00:33:45.976 [2024-11-17 04:50:24.607938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.607981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.607999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.608008] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.608017] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.618242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 00:33:45.976 [2024-11-17 04:50:24.628183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.628226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.628243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.628252] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.628261] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.638439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 
00:33:45.976 [2024-11-17 04:50:24.648156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.648198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.648214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.648224] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.648232] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.658457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 00:33:45.976 [2024-11-17 04:50:24.668073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.668118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.668135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.668144] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.668153] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.678589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 00:33:45.976 [2024-11-17 04:50:24.688298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.688340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.688357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.688366] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.688375] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.698420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 
00:33:45.976 [2024-11-17 04:50:24.708295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.708333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.708350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.708359] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.708368] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.718491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 00:33:45.976 [2024-11-17 04:50:24.728331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.728373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.728390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.976 [2024-11-17 04:50:24.728399] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.976 [2024-11-17 04:50:24.728407] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.976 [2024-11-17 04:50:24.738693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.976 qpair failed and we were unable to recover it. 00:33:45.976 [2024-11-17 04:50:24.748392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.976 [2024-11-17 04:50:24.748441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.976 [2024-11-17 04:50:24.748458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.977 [2024-11-17 04:50:24.748467] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.977 [2024-11-17 04:50:24.748476] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.977 [2024-11-17 04:50:24.758903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.977 qpair failed and we were unable to recover it. 
00:33:45.977 [2024-11-17 04:50:24.768577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.977 [2024-11-17 04:50:24.768617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.977 [2024-11-17 04:50:24.768633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.977 [2024-11-17 04:50:24.768643] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.977 [2024-11-17 04:50:24.768651] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.977 [2024-11-17 04:50:24.778806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.977 qpair failed and we were unable to recover it. 00:33:45.977 [2024-11-17 04:50:24.788556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.977 [2024-11-17 04:50:24.788597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.977 [2024-11-17 04:50:24.788613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.977 [2024-11-17 04:50:24.788623] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.977 [2024-11-17 04:50:24.788632] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:45.977 [2024-11-17 04:50:24.798846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.977 qpair failed and we were unable to recover it. 00:33:46.236 [2024-11-17 04:50:24.808628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.236 [2024-11-17 04:50:24.808671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.236 [2024-11-17 04:50:24.808691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.236 [2024-11-17 04:50:24.808701] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.236 [2024-11-17 04:50:24.808710] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:33:46.236 [2024-11-17 04:50:24.818871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.236 qpair failed and we were unable to recover it. 
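The block above repeats once per attempted I/O queue pair for as long as the target side is unreachable: each fabric CONNECT is rejected with "Unknown controller ID 0x1" (sct 1, sc 130), the host fails the qpair, and the example moves on to the next attempt. When triaging a saved console log, a rough tally of these attempts can be pulled with one-liners along these lines (a sketch only; "console.log" is a hypothetical file name standing in for wherever this output was captured):

    # Count the failed io-qpair recovery attempts in a captured log.
    # "console.log" is a placeholder, not part of this run.
    grep -c 'qpair failed and we were unable to recover it' console.log

    # Optional: break the CQ transport errors down by qpair id.
    grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c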
00:33:47.173 Write completed with error (sct=0, sc=8) 00:33:47.173 starting I/O failed 00:33:47.173 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Read completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 Write completed with error (sct=0, sc=8) 00:33:47.174 starting I/O failed 00:33:47.174 [2024-11-17 04:50:25.823906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:47.174 [2024-11-17 04:50:25.823964] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:33:47.174 A controller has encountered a failure and is being reset. 
00:33:47.174 [2024-11-17 04:50:25.824103] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:33:47.174 [2024-11-17 04:50:25.825919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:33:47.174 Controller properly reset.
00:33:48.110 Read completed with error (sct=0, sc=8)
00:33:48.110 starting I/O failed
00:33:48.110 Write completed with error (sct=0, sc=8)
00:33:48.110 starting I/O failed
00:33:48.110 Write completed with error (sct=0, sc=8)
00:33:48.110 starting I/O failed
00:33:48.110 Read completed with error (sct=0, sc=8)
00:33:48.110 starting I/O failed
00:33:48.110 Write completed with error (sct=0, sc=8)
00:33:48.110 starting I/O failed
00:33:48.110 Write completed with error (sct=0, sc=8)
00:33:48.110 starting I/O failed
00:33:48.110 Write completed with error (sct=0, sc=8)
00:33:48.110 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Read completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 Write completed with error (sct=0, sc=8)
00:33:48.111 starting I/O failed
00:33:48.111 [2024-11-17 04:50:26.847374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:33:48.111 Initializing NVMe Controllers
00:33:48.111 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:33:48.111 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:33:48.111 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:33:48.111 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:33:48.111 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:33:48.111 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:33:48.111 Initialization complete. Launching workers.
00:33:48.111 Starting thread on core 1
00:33:48.111 Starting thread on core 2
00:33:48.111 Starting thread on core 3
00:33:48.111 Starting thread on core 0
00:33:48.111 04:50:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:33:48.111
00:33:48.111 real 0m12.923s
00:33:48.111 user 0m24.349s
00:33:48.111 sys 0m3.315s
00:33:48.111 04:50:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:48.111 04:50:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:48.111 ************************************
00:33:48.111 END TEST nvmf_target_disconnect_tc2
00:33:48.111 ************************************
00:33:48.370 04:50:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']'
00:33:48.370 04:50:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:33:48.370 04:50:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:33:48.370 04:50:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:48.370 04:50:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:33:48.370 ************************************
00:33:48.370 START TEST nvmf_target_disconnect_tc3
00:33:48.370 ************************************
04:50:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3
04:50:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=506994
04:50:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2
04:50:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:33:50.272 04:50:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 505617
00:33:50.272 04:50:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Read completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 Write completed with error (sct=0, sc=8)
00:33:51.651 starting I/O failed
00:33:51.651 [2024-11-17 04:50:30.211872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
00:33:52.219 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 505617 Killed "${NVMF_APP[@]}" "$@"
00:33:52.219 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9
04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=507777 00:33:52.219 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 507777 00:33:52.219 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:52.219 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 507777 ']' 00:33:52.219 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.219 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:52.220 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.220 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:52.220 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:52.479 [2024-11-17 04:50:31.071530] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:52.479 [2024-11-17 04:50:31.071586] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.479 [2024-11-17 04:50:31.166692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:52.479 [2024-11-17 04:50:31.187999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.479 [2024-11-17 04:50:31.188042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.479 [2024-11-17 04:50:31.188051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.479 [2024-11-17 04:50:31.188060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.479 [2024-11-17 04:50:31.188067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
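[editor's note] For reference, the host-side load that produces the failed I/O above is SPDK's reconnect example, launched earlier by tc3. A hedged restatement of that invocation, with values copied from the log and the flag annotations being my reading of the example's perf-style options:

# tc3 I/O load: dial the primary target, with a failover address supplied.
#   -q 32      queue depth per worker       -o 4096   4 KiB I/Os
#   -w randrw  random mixed read/write      -M 50     50% reads
#   -t 10      10 second run time           -c 0xF    workers on cores 0-3
#   -r         target transport ID; alt_traddr names the failover listener
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
  -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
  -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'

The repeated "completed with error (sct=0, sc=8)" entries are these workers' in-flight I/Os being aborted as the qpairs drop; by my reading of the NVMe base spec, generic status type 0 / status code 0x8 is "Command Aborted due to SQ Deletion".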
00:33:52.479 [2024-11-17 04:50:31.189669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:52.479 [2024-11-17 04:50:31.189784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:52.479 [2024-11-17 04:50:31.189891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:52.479 [2024-11-17 04:50:31.189893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:52.479 Read completed with error (sct=0, sc=8) 00:33:52.479 starting I/O failed 00:33:52.479 Write completed with error (sct=0, sc=8) 00:33:52.479 starting I/O failed 00:33:52.479 Write completed with error (sct=0, sc=8) 00:33:52.479 starting I/O failed 00:33:52.479 Write completed with error (sct=0, sc=8) 00:33:52.479 starting I/O failed 00:33:52.479 Write completed with error (sct=0, sc=8) 00:33:52.479 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Read completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 Write completed with error (sct=0, sc=8) 00:33:52.480 starting I/O failed 00:33:52.480 [2024-11-17 04:50:31.217021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.480 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:52.480 
04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:33:52.480 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:52.480 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:52.480 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:52.757 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.757 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:52.758 Malloc0 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:52.758 [2024-11-17 04:50:31.401451] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x232d640/0x23392d0) succeed. 00:33:52.758 [2024-11-17 04:50:31.411205] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x232ec80/0x237a970) succeed. 
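[editor's note] The rpc_cmd calls traced around this point bring up the replacement target end to end. A minimal consolidated sketch of the same sequence using scripts/rpc.py directly; every command and argument is taken from the trace, and the listener deliberately binds the failover address 192.168.100.9 that tc3 hands to the reconnect example:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB ramdisk, 512 B blocks
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024  # RDMA transport layer
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420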
00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:52.758 [2024-11-17 04:50:31.559506] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:33:52.758 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.759 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:52.759 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.759 04:50:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 506994 00:33:53.701 Read completed with error (sct=0, sc=8) 00:33:53.701 starting I/O failed 00:33:53.701 Read completed with error (sct=0, sc=8) 00:33:53.701 starting I/O failed 00:33:53.701 Write completed with error (sct=0, sc=8) 00:33:53.701 starting I/O failed 00:33:53.701 Read completed with error (sct=0, sc=8) 00:33:53.701 starting I/O failed 00:33:53.701 Write completed with error (sct=0, sc=8) 00:33:53.701 starting I/O failed 00:33:53.701 Write completed with error (sct=0, sc=8) 00:33:53.701 starting I/O failed 00:33:53.701 Read completed with error (sct=0, sc=8) 00:33:53.701 
starting I/O failed 00:33:53.701 Read completed with error (sct=0, sc=8) 00:33:53.701 starting I/O failed 00:33:53.701 Read completed with error (sct=0, sc=8) 00:33:53.701 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Read completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 Write completed with error (sct=0, sc=8) 00:33:53.702 starting I/O failed 00:33:53.702 [2024-11-17 04:50:32.222003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:53.702 [2024-11-17 04:50:32.223704] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:53.702 [2024-11-17 04:50:32.223726] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:53.702 [2024-11-17 04:50:32.223735] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:33:54.640 [2024-11-17 04:50:33.227556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:54.640 qpair failed and we were unable to recover it. 
00:33:54.640 [2024-11-17 04:50:33.228922] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:54.640 [2024-11-17 04:50:33.228944] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:54.640 [2024-11-17 04:50:33.228953] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:33:55.578 [2024-11-17 04:50:34.232846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.578 qpair failed and we were unable to recover it. 00:33:55.578 [2024-11-17 04:50:34.234326] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:55.578 [2024-11-17 04:50:34.234345] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:55.578 [2024-11-17 04:50:34.234353] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:33:56.552 [2024-11-17 04:50:35.238091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.552 qpair failed and we were unable to recover it. 00:33:56.552 [2024-11-17 04:50:35.239545] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:56.552 [2024-11-17 04:50:35.239564] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:56.552 [2024-11-17 04:50:35.239575] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:33:57.490 [2024-11-17 04:50:36.243570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.490 qpair failed and we were unable to recover it. 00:33:57.490 [2024-11-17 04:50:36.245078] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:57.490 [2024-11-17 04:50:36.245105] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:57.490 [2024-11-17 04:50:36.245113] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:33:58.427 [2024-11-17 04:50:37.248892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.427 qpair failed and we were unable to recover it. 
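[editor's note] Each of these once-per-second attempts (above, and continuing below) ends in RDMA_CM_EVENT_REJECTED because the killed target no longer accepts on the primary address; the host keeps retrying until the keep-alive failure finally triggers the resort to the failover address. A hedged way to watch the same transition from the shell — addresses and port come from the log, but the probe loop itself is illustrative and not part of the test:

# Poll the failover address until its discovery service answers.
until nvme discover -t rdma -a 192.168.100.9 -s 4420 >/dev/null 2>&1; do
  sleep 1
done
echo "failover target is accepting connections on 192.168.100.9:4420"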
00:33:58.427 [2024-11-17 04:50:37.250466] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:58.427 [2024-11-17 04:50:37.250485] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:58.427 [2024-11-17 04:50:37.250493] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:33:59.807 [2024-11-17 04:50:38.254406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.807 qpair failed and we were unable to recover it. 00:33:59.807 [2024-11-17 04:50:38.255882] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:59.807 [2024-11-17 04:50:38.255901] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:59.807 [2024-11-17 04:50:38.255909] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:34:00.744 [2024-11-17 04:50:39.259836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.744 qpair failed and we were unable to recover it. 00:34:00.744 [2024-11-17 04:50:39.262133] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:00.744 [2024-11-17 04:50:39.262194] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:00.744 [2024-11-17 04:50:39.262224] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:34:01.680 [2024-11-17 04:50:40.266184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:34:01.680 qpair failed and we were unable to recover it. 00:34:01.680 [2024-11-17 04:50:40.267604] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:01.680 [2024-11-17 04:50:40.267624] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:01.680 [2024-11-17 04:50:40.267632] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:34:02.618 [2024-11-17 04:50:41.271553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:34:02.619 qpair failed and we were unable to recover it. 00:34:02.619 [2024-11-17 04:50:41.271658] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:34:02.619 A controller has encountered a failure and is being reset. 00:34:02.619 Resorting to new failover address 192.168.100.9 00:34:02.619 [2024-11-17 04:50:41.271752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:34:02.619 [2024-11-17 04:50:41.271829] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:02.619 [2024-11-17 04:50:41.273855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:34:02.619 Controller properly reset. 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Write completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 Read completed with error (sct=0, sc=8) 00:34:03.557 starting I/O failed 00:34:03.557 [2024-11-17 04:50:42.320970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:03.557 Initializing NVMe Controllers 00:34:03.557 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode1 00:34:03.557 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:03.557 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:03.557 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:03.557 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:03.557 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:03.557 Initialization complete. Launching workers. 00:34:03.557 Starting thread on core 1 00:34:03.557 Starting thread on core 2 00:34:03.557 Starting thread on core 3 00:34:03.557 Starting thread on core 0 00:34:03.557 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:34:03.557 00:34:03.557 real 0m15.382s 00:34:03.557 user 1m0.393s 00:34:03.557 sys 0m4.497s 00:34:03.557 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:03.557 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:03.557 ************************************ 00:34:03.557 END TEST nvmf_target_disconnect_tc3 00:34:03.557 ************************************ 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:03.817 rmmod nvme_rdma 00:34:03.817 rmmod nvme_fabrics 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 507777 ']' 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 507777 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 507777 ']' 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 507777 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 507777 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 507777' 00:34:03.817 killing process with pid 507777 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 507777 00:34:03.817 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 507777 00:34:04.077 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:04.077 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:04.077 00:34:04.077 real 0m37.559s 00:34:04.077 user 2m13.800s 00:34:04.077 sys 0m14.123s 00:34:04.077 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.077 04:50:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:04.077 ************************************ 00:34:04.077 END TEST nvmf_target_disconnect 00:34:04.077 ************************************ 00:34:04.077 04:50:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:04.077 00:34:04.077 real 7m24.952s 00:34:04.077 user 20m38.108s 00:34:04.077 sys 1m47.808s 00:34:04.077 04:50:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.077 04:50:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.077 ************************************ 00:34:04.077 END TEST nvmf_host 00:34:04.077 ************************************ 00:34:04.077 04:50:42 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:34:04.077 00:34:04.077 real 27m34.803s 00:34:04.077 user 79m21.073s 00:34:04.077 sys 6m53.519s 00:34:04.077 04:50:42 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.077 04:50:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:04.077 ************************************ 00:34:04.077 END TEST nvmf_rdma 00:34:04.077 ************************************ 00:34:04.337 04:50:42 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:34:04.337 04:50:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:04.337 04:50:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.337 04:50:42 -- common/autotest_common.sh@10 -- # set +x 00:34:04.337 ************************************ 00:34:04.337 START TEST spdkcli_nvmf_rdma 00:34:04.337 ************************************ 00:34:04.337 04:50:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:34:04.337 * Looking for test storage... 
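[editor's note] The xtrace that follows steps through scripts/common.sh's lt()/cmp_versions() helpers to decide whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_* option spelling. A standalone sketch of the same idea, simplified to dot-separated numeric components (the real helper also splits on '-' and ':'):

ver_lt() {  # returns 0 (true) when $1 < $2, compared numerically component-wise
  local IFS=. i
  local -a a=($1) b=($2)
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
    (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
  done
  return 1  # versions are equal
}
ver_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"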
00:34:04.337 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:04.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.337 --rc genhtml_branch_coverage=1 00:34:04.337 --rc genhtml_function_coverage=1 00:34:04.337 --rc genhtml_legend=1 00:34:04.337 --rc geninfo_all_blocks=1 00:34:04.337 --rc geninfo_unexecuted_blocks=1 00:34:04.337 00:34:04.337 ' 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:04.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:34:04.337 --rc genhtml_branch_coverage=1 00:34:04.337 --rc genhtml_function_coverage=1 00:34:04.337 --rc genhtml_legend=1 00:34:04.337 --rc geninfo_all_blocks=1 00:34:04.337 --rc geninfo_unexecuted_blocks=1 00:34:04.337 00:34:04.337 ' 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:04.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.337 --rc genhtml_branch_coverage=1 00:34:04.337 --rc genhtml_function_coverage=1 00:34:04.337 --rc genhtml_legend=1 00:34:04.337 --rc geninfo_all_blocks=1 00:34:04.337 --rc geninfo_unexecuted_blocks=1 00:34:04.337 00:34:04.337 ' 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:04.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.337 --rc genhtml_branch_coverage=1 00:34:04.337 --rc genhtml_function_coverage=1 00:34:04.337 --rc genhtml_legend=1 00:34:04.337 --rc geninfo_all_blocks=1 00:34:04.337 --rc geninfo_unexecuted_blocks=1 00:34:04.337 00:34:04.337 ' 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.337 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:04.598 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=509770 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 509770 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 509770 ']' 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:04.598 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:04.598 [2024-11-17 04:50:43.245574] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:04.598 [2024-11-17 04:50:43.245629] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509770 ] 00:34:04.598 [2024-11-17 04:50:43.334242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:04.598 [2024-11-17 04:50:43.358292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.598 [2024-11-17 04:50:43.358294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.858 
04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:34:04.858 04:50:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.718 04:50:50 
spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:11.718 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:11.718 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:11.718 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.718 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:11.719 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
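[editor's note] The rdma_device_init path traced below loads the kernel-side RDMA stack before IP addresses are assigned to the mlx5 ports. Condensed, the module bring-up amounts to the following loop, with the module list copied from the modprobe sequence in the trace (Linux only; the script checks uname first):

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$mod"
done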
00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:11.719 04:50:50 
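
rdma_device_init above amounts to a modprobe sequence for the kernel IB/RDMA stack, after which the RDMA-capable netdevs can be read straight back out of sysfs. A hedged sketch with the same module names as the trace (requires root; a module built into the kernel makes modprobe a no-op):

#!/usr/bin/env bash
# Load the IB/RDMA kernel modules probed in the trace above.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$mod"
done
# Netdevs backing each RDMA device then appear under /sys/class/infiniband:
ls /sys/class/infiniband/*/device/net 2>/dev/null
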
spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:11.719 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:11.719 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:11.719 altname enp217s0f0np0 00:34:11.719 altname ens818f0np0 00:34:11.719 inet 192.168.100.8/24 scope global mlx_0_0 00:34:11.719 valid_lft forever preferred_lft forever 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:11.719 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:11.719 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:11.719 altname enp217s0f1np1 00:34:11.719 altname ens818f1np1 00:34:11.719 inet 192.168.100.9/24 scope global mlx_0_1 00:34:11.719 valid_lft forever preferred_lft forever 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:11.719 192.168.100.9' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:11.719 192.168.100.9' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:11.719 192.168.100.9' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.719 04:50:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:12.110 04:50:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:12.110 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:12.110 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:12.110 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:12.110 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:12.110 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:34:12.110 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:12.111 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:34:12.111 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:34:12.111 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:12.111 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:12.111 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:12.111 ' 00:34:14.697 [2024-11-17 04:50:53.298578] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf53840/0xe3dec0) succeed. 00:34:14.697 [2024-11-17 04:50:53.307935] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf4ea00/0xe7f560) succeed. 
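
The quoted blob handed to spdkcli_job.py above is a flat list of triples: an spdkcli command, a substring expected in its output, and a flag saying whether the command must succeed; the 'Executing command' echoes that follow are those triples being replayed. The same configuration can be built one command at a time with scripts/spdkcli.py. A reduced sketch (one subsystem instead of three), assuming an SPDK target is already serving the default RPC socket:

SPDKCLI=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py
$SPDKCLI /bdevs/malloc create 32 512 Malloc1
$SPDKCLI /nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
$SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4
$SPDKCLI ll /nvmf    # dump the resulting tree, as check_match does later
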
00:34:16.076 [2024-11-17 04:50:54.702177] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:34:18.612 [2024-11-17 04:50:57.190022] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:34:21.152 [2024-11-17 04:50:59.361133] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:34:22.529 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:22.529 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:22.529 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:22.529 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:22.529 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:22.529 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:22.529 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:22.529 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:34:22.529 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:34:22.529 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:34:22.529 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:22.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:22.529 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:22.529 04:51:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:22.529 04:51:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.529 04:51:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:22.529 04:51:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:22.529 04:51:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.529 04:51:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:22.529 04:51:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:34:22.529 04:51:01 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:22.788 04:51:01 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:23.047 04:51:01 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:23.047 04:51:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:23.047 04:51:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:23.047 04:51:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:23.047 04:51:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:23.047 04:51:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.047 04:51:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:23.047 04:51:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:23.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:23.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:23.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:23.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:34:23.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:34:23.047 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:23.047 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:23.047 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:23.047 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:23.047 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:23.048 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:23.048 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:23.048 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:23.048 ' 00:34:29.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:29.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:29.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:29.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:29.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:34:29.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:34:29.617 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:29.617 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:29.617 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:29.617 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:29.617 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:29.617 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:29.617 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:29.617 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 509770 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 509770 ']' 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 509770 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 509770 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 509770' 00:34:29.617 killing process with pid 509770 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 509770 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 509770 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:29.617 
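
killprocess above never signals blindly: it first checks the PID is still alive with kill -0, reads the process name back so a recycled PID or a sudo wrapper is never hit, and only then kills and reaps. A standalone rendering of that pattern; the PID is simply the one from this run:

pid=509770                                 # spdk_tgt PID in this log
if kill -0 "$pid" 2>/dev/null; then
  name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
  if [[ $name != sudo ]]; then             # refuse to kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true        # reap only if it is our child
  fi
fi
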
04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:29.617 rmmod nvme_rdma 00:34:29.617 rmmod nvme_fabrics 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:29.617 04:51:07 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:29.618 00:34:29.618 real 0m24.727s 00:34:29.618 user 0m54.594s 00:34:29.618 sys 0m6.443s 00:34:29.618 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.618 04:51:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:29.618 ************************************ 00:34:29.618 END TEST spdkcli_nvmf_rdma 00:34:29.618 ************************************ 00:34:29.618 04:51:07 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:29.618 04:51:07 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:29.618 04:51:07 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:29.618 04:51:07 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:29.618 04:51:07 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:29.618 04:51:07 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:29.618 04:51:07 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:29.618 04:51:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.618 04:51:07 -- common/autotest_common.sh@10 -- # set +x 00:34:29.618 04:51:07 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:29.618 04:51:07 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:29.618 04:51:07 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:29.618 04:51:07 -- common/autotest_common.sh@10 -- # set +x 00:34:36.185 INFO: APP EXITING 00:34:36.186 INFO: killing all VMs 00:34:36.186 INFO: killing vhost app 00:34:36.186 INFO: EXIT DONE 00:34:39.477 Waiting for block devices as requested 00:34:39.477 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:39.477 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:39.477 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:39.477 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:39.477 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:39.477 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:39.477 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:39.477 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:39.736 
0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:39.736 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:39.736 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:39.995 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:39.995 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:39.995 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:40.254 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:40.254 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:40.254 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:44.451 Cleaning 00:34:44.451 Removing: /var/run/dpdk/spdk0/config 00:34:44.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:44.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:44.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:44.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:44.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:44.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:44.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:44.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:44.451 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:44.451 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:44.451 Removing: /var/run/dpdk/spdk1/config 00:34:44.451 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:44.451 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:44.451 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:44.451 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:44.451 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:44.451 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:44.451 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:44.451 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:44.451 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:44.451 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:44.451 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:44.451 Removing: /var/run/dpdk/spdk2/config 00:34:44.451 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:44.451 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:44.451 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:44.451 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:44.451 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:44.451 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:44.451 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:44.451 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:44.451 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:44.451 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:44.451 Removing: /var/run/dpdk/spdk3/config 00:34:44.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:44.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:44.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:44.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:44.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:44.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:44.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:44.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:44.451 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:44.451 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:44.451 Removing: /var/run/dpdk/spdk4/config 00:34:44.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 
00:34:44.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:44.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:44.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:44.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:44.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:44.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:44.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:44.451 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:44.451 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:44.451 Removing: /dev/shm/bdevperf_trace.pid148917 00:34:44.451 Removing: /dev/shm/bdev_svc_trace.1 00:34:44.451 Removing: /dev/shm/nvmf_trace.0 00:34:44.451 Removing: /dev/shm/spdk_tgt_trace.pid104677 00:34:44.451 Removing: /var/run/dpdk/spdk0 00:34:44.451 Removing: /var/run/dpdk/spdk1 00:34:44.451 Removing: /var/run/dpdk/spdk2 00:34:44.451 Removing: /var/run/dpdk/spdk3 00:34:44.451 Removing: /var/run/dpdk/spdk4 00:34:44.451 Removing: /var/run/dpdk/spdk_pid101926 00:34:44.451 Removing: /var/run/dpdk/spdk_pid103194 00:34:44.451 Removing: /var/run/dpdk/spdk_pid104677 00:34:44.451 Removing: /var/run/dpdk/spdk_pid105135 00:34:44.451 Removing: /var/run/dpdk/spdk_pid106218 00:34:44.451 Removing: /var/run/dpdk/spdk_pid106251 00:34:44.451 Removing: /var/run/dpdk/spdk_pid107354 00:34:44.451 Removing: /var/run/dpdk/spdk_pid107364 00:34:44.451 Removing: /var/run/dpdk/spdk_pid107753 00:34:44.451 Removing: /var/run/dpdk/spdk_pid112864 00:34:44.451 Removing: /var/run/dpdk/spdk_pid114381 00:34:44.451 Removing: /var/run/dpdk/spdk_pid114770 00:34:44.451 Removing: /var/run/dpdk/spdk_pid114983 00:34:44.451 Removing: /var/run/dpdk/spdk_pid115327 00:34:44.451 Removing: /var/run/dpdk/spdk_pid115653 00:34:44.451 Removing: /var/run/dpdk/spdk_pid115938 00:34:44.451 Removing: /var/run/dpdk/spdk_pid116230 00:34:44.451 Removing: /var/run/dpdk/spdk_pid116550 00:34:44.451 Removing: /var/run/dpdk/spdk_pid117169 00:34:44.451 Removing: /var/run/dpdk/spdk_pid120309 00:34:44.451 Removing: /var/run/dpdk/spdk_pid120605 00:34:44.451 Removing: /var/run/dpdk/spdk_pid120893 00:34:44.451 Removing: /var/run/dpdk/spdk_pid120900 00:34:44.451 Removing: /var/run/dpdk/spdk_pid121409 00:34:44.451 Removing: /var/run/dpdk/spdk_pid121471 00:34:44.451 Removing: /var/run/dpdk/spdk_pid122042 00:34:44.451 Removing: /var/run/dpdk/spdk_pid122048 00:34:44.451 Removing: /var/run/dpdk/spdk_pid122346 00:34:44.451 Removing: /var/run/dpdk/spdk_pid122478 00:34:44.451 Removing: /var/run/dpdk/spdk_pid122650 00:34:44.451 Removing: /var/run/dpdk/spdk_pid122757 00:34:44.451 Removing: /var/run/dpdk/spdk_pid123289 00:34:44.451 Removing: /var/run/dpdk/spdk_pid123570 00:34:44.451 Removing: /var/run/dpdk/spdk_pid123908 00:34:44.452 Removing: /var/run/dpdk/spdk_pid127802 00:34:44.452 Removing: /var/run/dpdk/spdk_pid132072 00:34:44.452 Removing: /var/run/dpdk/spdk_pid143076 00:34:44.452 Removing: /var/run/dpdk/spdk_pid143936 00:34:44.452 Removing: /var/run/dpdk/spdk_pid148917 00:34:44.452 Removing: /var/run/dpdk/spdk_pid149199 00:34:44.452 Removing: /var/run/dpdk/spdk_pid153475 00:34:44.452 Removing: /var/run/dpdk/spdk_pid159372 00:34:44.452 Removing: /var/run/dpdk/spdk_pid162109 00:34:44.452 Removing: /var/run/dpdk/spdk_pid172315 00:34:44.452 Removing: /var/run/dpdk/spdk_pid197841 00:34:44.452 Removing: /var/run/dpdk/spdk_pid201649 00:34:44.452 Removing: /var/run/dpdk/spdk_pid296481 00:34:44.452 Removing: /var/run/dpdk/spdk_pid301835 00:34:44.452 Removing: 
/var/run/dpdk/spdk_pid307539 00:34:44.452 Removing: /var/run/dpdk/spdk_pid316390 00:34:44.452 Removing: /var/run/dpdk/spdk_pid348062 00:34:44.452 Removing: /var/run/dpdk/spdk_pid353684 00:34:44.452 Removing: /var/run/dpdk/spdk_pid394837 00:34:44.452 Removing: /var/run/dpdk/spdk_pid395783 00:34:44.452 Removing: /var/run/dpdk/spdk_pid396749 00:34:44.452 Removing: /var/run/dpdk/spdk_pid397753 00:34:44.452 Removing: /var/run/dpdk/spdk_pid402507 00:34:44.452 Removing: /var/run/dpdk/spdk_pid408795 00:34:44.452 Removing: /var/run/dpdk/spdk_pid415793 00:34:44.452 Removing: /var/run/dpdk/spdk_pid416767 00:34:44.452 Removing: /var/run/dpdk/spdk_pid417574 00:34:44.452 Removing: /var/run/dpdk/spdk_pid418618 00:34:44.452 Removing: /var/run/dpdk/spdk_pid418901 00:34:44.452 Removing: /var/run/dpdk/spdk_pid423415 00:34:44.452 Removing: /var/run/dpdk/spdk_pid423417 00:34:44.452 Removing: /var/run/dpdk/spdk_pid427962 00:34:44.452 Removing: /var/run/dpdk/spdk_pid428498 00:34:44.452 Removing: /var/run/dpdk/spdk_pid429031 00:34:44.452 Removing: /var/run/dpdk/spdk_pid429820 00:34:44.452 Removing: /var/run/dpdk/spdk_pid429832 00:34:44.452 Removing: /var/run/dpdk/spdk_pid432226 00:34:44.452 Removing: /var/run/dpdk/spdk_pid434625 00:34:44.452 Removing: /var/run/dpdk/spdk_pid436479 00:34:44.452 Removing: /var/run/dpdk/spdk_pid438339 00:34:44.452 Removing: /var/run/dpdk/spdk_pid440191 00:34:44.452 Removing: /var/run/dpdk/spdk_pid442066 00:34:44.452 Removing: /var/run/dpdk/spdk_pid448188 00:34:44.452 Removing: /var/run/dpdk/spdk_pid448843 00:34:44.452 Removing: /var/run/dpdk/spdk_pid451123 00:34:44.452 Removing: /var/run/dpdk/spdk_pid452319 00:34:44.712 Removing: /var/run/dpdk/spdk_pid459292 00:34:44.712 Removing: /var/run/dpdk/spdk_pid461997 00:34:44.712 Removing: /var/run/dpdk/spdk_pid467401 00:34:44.712 Removing: /var/run/dpdk/spdk_pid478278 00:34:44.712 Removing: /var/run/dpdk/spdk_pid478305 00:34:44.712 Removing: /var/run/dpdk/spdk_pid498394 00:34:44.712 Removing: /var/run/dpdk/spdk_pid498664 00:34:44.712 Removing: /var/run/dpdk/spdk_pid504704 00:34:44.712 Removing: /var/run/dpdk/spdk_pid505066 00:34:44.712 Removing: /var/run/dpdk/spdk_pid506994 00:34:44.712 Removing: /var/run/dpdk/spdk_pid509770 00:34:44.712 Clean 00:34:44.712 04:51:23 -- common/autotest_common.sh@1453 -- # return 0 00:34:44.712 04:51:23 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:44.712 04:51:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:44.712 04:51:23 -- common/autotest_common.sh@10 -- # set +x 00:34:44.712 04:51:23 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:44.712 04:51:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:44.712 04:51:23 -- common/autotest_common.sh@10 -- # set +x 00:34:44.712 04:51:23 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:34:44.712 04:51:23 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:34:44.712 04:51:23 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:34:44.712 04:51:23 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:44.712 04:51:23 -- spdk/autotest.sh@398 -- # hostname 00:34:44.712 04:51:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:34:44.972 geninfo: WARNING: invalid characters removed from testname! 00:35:06.916 04:51:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:07.484 04:51:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:09.389 04:51:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:11.294 04:51:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:12.674 04:51:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:14.579 04:51:53 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:16.485 04:51:54 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:16.485 04:51:54 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:16.485 04:51:54 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:35:16.485 04:51:54 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:16.485 04:51:54 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:16.485 04:51:54 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:35:16.485 + [[ -n 6771 ]] 00:35:16.485 + sudo kill 6771 00:35:16.495 [Pipeline] } 00:35:16.511 [Pipeline] // stage 00:35:16.516 [Pipeline] } 00:35:16.531 [Pipeline] // timeout 00:35:16.536 [Pipeline] } 00:35:16.550 [Pipeline] // catchError 00:35:16.555 [Pipeline] } 00:35:16.570 [Pipeline] // wrap 00:35:16.576 [Pipeline] } 00:35:16.589 [Pipeline] // catchError 00:35:16.599 [Pipeline] stage 00:35:16.601 [Pipeline] { (Epilogue) 00:35:16.614 [Pipeline] catchError 00:35:16.616 [Pipeline] { 00:35:16.628 [Pipeline] echo 00:35:16.630 Cleanup processes 00:35:16.636 [Pipeline] sh 00:35:16.924 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:35:16.924 530048 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:35:16.938 [Pipeline] sh 00:35:17.225 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:35:17.225 ++ grep -v 'sudo pgrep' 00:35:17.225 ++ awk '{print $1}' 00:35:17.225 + sudo kill -9 00:35:17.225 + true 00:35:17.237 [Pipeline] sh 00:35:17.524 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:17.524 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:35:22.797 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:35:27.008 [Pipeline] sh 00:35:27.294 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:27.294 Artifacts sizes are good 00:35:27.309 [Pipeline] archiveArtifacts 00:35:27.317 Archiving artifacts 00:35:27.713 [Pipeline] sh 00:35:28.001 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:35:28.019 [Pipeline] cleanWs 00:35:28.031 [WS-CLEANUP] Deleting project workspace... 00:35:28.031 [WS-CLEANUP] Deferred wipeout is used... 00:35:28.038 [WS-CLEANUP] done 00:35:28.040 [Pipeline] } 00:35:28.058 [Pipeline] // catchError 00:35:28.073 [Pipeline] sh 00:35:28.363 + logger -p user.info -t JENKINS-CI 00:35:28.373 [Pipeline] } 00:35:28.389 [Pipeline] // stage 00:35:28.395 [Pipeline] } 00:35:28.411 [Pipeline] // node 00:35:28.417 [Pipeline] End of Pipeline 00:35:28.472 Finished: SUCCESS
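
For reference, the coverage post-processing traced shortly before this epilogue reduces to one lcov merge followed by successive --remove filters over the combined tracefile; a condensed sketch using the same patterns and flags that appear in the log:

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
lcov -q -r cov_total.info '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o cov_total.info
rm -f cov_base.info cov_test.info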